Method to test if a number is a perfect power?



Is there a general method for testing whether a number is a perfect $n$th power?

For example, suppose that I did not know that $121$ was a perfect square. A naive test in code might be to check whether
$$\lfloor\sqrt{121}\rfloor=\sqrt{121}.$$

But I imagine there are much more efficient ways of doing this (especially if I'm working with numbers with many digits).
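The naive test above can be made exact, avoiding floating point entirely, by comparing the number against the square of its integer square root. A minimal Python sketch (assuming Python 3.8+ for `math.isqrt`):

```python
import math

def is_perfect_square(k: int) -> bool:
    """Exact perfect-square test using integer arithmetic only."""
    if k < 0:
        return False
    r = math.isqrt(k)  # floor of the square root, computed exactly
    return r * r == k

print(is_perfect_square(121))  # True
print(is_perfect_square(120))  # False
```

Because `isqrt` works on arbitrary-precision integers, this test stays exact even for numbers with many digits, where `sqrt`-based floating-point comparisons start to misfire.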







  • 2 – One very cheap, necessary condition is that $x^2 \equiv 0,1 \pmod 4$. – Alex R., yesterday

  • Are you given numbers $k$ and $n$ and asked to check whether $k$ is an $n$th power? Or are you given just $k$ and asked to check whether $k$ is a perfect power? – Servaes, yesterday

  • 1 – @Servaes, I was considering the first case, where I know both $k$ and $n$ and am trying to see if $k = a^n$ for a positive integer $a$. – D.B., yesterday

  • Wait, @Alex R. Looking at your first comment, what about $x^2 = 40 \equiv 0 \pmod 4$? Yet $40$ is not a perfect square. – D.B., yesterday

  • 5 – @D.B.: Hence it's a necessary condition: if $x^2$ is a perfect square, then $x^2 \equiv 0,1 \pmod 4$. The contrapositive gives: if $y \equiv 2,3 \pmod 4$, then $y$ cannot be a perfect square. – Alex R., yesterday
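Alex R.'s condition gives a cheap pre-filter that rejects many non-squares before any root extraction; a small Python sketch:

```python
def passes_square_filter(y: int) -> bool:
    """Necessary (not sufficient) condition: squares are 0 or 1 mod 4."""
    return y % 4 in (0, 1)

# 7 % 4 == 3, so 7 is rejected immediately.
# 40 % 4 == 0, so 40 passes the filter even though it is not a square,
# which is exactly the point raised in the comments above.
print(passes_square_filter(7))   # False
print(passes_square_filter(40))  # True
```

A passing value still needs an exact check; the filter's only job is to dismiss roughly half of all candidates with a single modulus operation.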
















number-theory perfect-powers






edited yesterday by Chase Ryan Taylor
asked yesterday by D.B.







4 Answers


















17

See "Detecting perfect powers in essentially linear time" by Daniel J. Bernstein:

https://cr.yp.to/papers/powers-ams.pdf

answered yesterday by Alex J Best
2

In the specific case where you already know not only the number being checked but also the power, as the OP's comment to Servaes states, you have something like
$$k = a^n \tag{1}\label{eq1}$$
where $k$ and $n$ are known integers, and $a$ is an unknown value to be checked for being an integer. For $n$ not too large, one relatively fast and easy way to show that $a$ is not an integer is to take all factors of $n$ and check those which are $1$ less than a prime. In those cases, Fermat's little theorem shows that $k$ must be congruent to $0$ or $1$ modulo that prime; if it's not, then $a$ can't be an integer. This test is especially useful if you are using a fixed value of $n$ and checking many values of $k$.

If you skip the initial check above, or it passes, you can next use a function to get the $n$th root, such as the `pow` function in C/C++ with a second argument of $\frac{1.0}{n}$, to get something like
$$a = \sqrt[n]{k} \tag{2}\label{eq2}$$

As for speed and accuracy, I once ran a test on all integers from $1$ to $5 \times 10^{11}$, taking their sixth roots (using `pow`) and then cubing the results by multiplying three times, plus another test taking all square roots directly (using `sqrt`), and then finding the maximum of the absolute and relative differences. I used VS 2008 on an AMD FX(tm)-8320 Eight-Core Processor, 3.5 GHz, 8 GB RAM, 8 MB L2 & L3 cache, 64-bit Windows 7 computer. Square roots took 16403 seconds, while sixth roots then cubing took 37915 seconds. The maximum absolute difference was $8.149\ldots \times 10^{-10}$ and the maximum relative difference was $1.332\ldots \times 10^{-15}$. This gives an indication of how fast and accurate the library routines are, although results will obviously vary depending on the compiler and machine involved.

Alternatively, taking natural logarithms of both sides (you could use any base, but I suspect that implementation-wise $e$ will likely be the fastest, if not also the most accurate) gives
$$\ln(k) = n\ln(a) \;\Rightarrow\; \ln(a) = \frac{\ln(k)}{n} \;\Rightarrow\; a = e^{\frac{\ln(k)}{n}} \tag{3}\label{eq3}$$

As this involves two basic steps, taking a logarithm and then exponentiating, it may take longer and involve a larger cumulative error than using \eqref{eq2} instead.

Either method, on a computer, will give a floating-point value that is, even for large values of $k$, relatively close to the correct value of $a$.

You can now use any number of algorithms to relatively quickly and easily determine $a$ if it's an integer, or show it's not an integer. For example, you can start with the integer part obtained in \eqref{eq2}, call it $a_1$, and compute the corresponding power $k_1 = a_1^n$. If $k_1$ is not correct, then if it's less than $k$, check $a_2 = a_1 + 1$, else check $a_2 = a_1 - 1$, and call the new result $k_2$. If $k_2$ is still not correct, add or subtract the integer amount (making sure it's at least $1$) of $\left|\frac{k - k_2}{k_1 - k_2}\right|$ to $a_2$ to get a new $a_1$ value to check, and repeat these steps as many times as needed. In almost all cases, I believe it should take few loops to find the correct value. However, you should also include checks for the case that no such integer $a$ exists, usually detected when one integer value gives a lower result and the next higher gives a higher result (or vice versa).

As for overall speed and accuracy, much depends on the machine, compiler, number types and final algorithm used. I assume the values fit within 64-bit unsigned integers, since most machines don't have native integer types larger than this, larger floating-point values have difficulties representing all integers (e.g., you get situations where $f + 1.0$ is stored as $f$), and packages providing more integer bits can be fairly slow. As my earlier test results show, the root calculations took an average of less than $10^{-7}$ seconds each, i.e., a few hundred machine instructions at most. The accuracy for a single calculation would be better than the $10^{-10}$ observed across multiple ones, likely at least around $10^{-12}$. Assuming the value of $a$ only goes up to about $10^{17}$ (as the maximum value for an unsigned 64-bit integer is about $1.85 \times 10^{19}$), this gives a maximum initial error in $a$ of at most in the thousands. The algorithm I propose to determine the final value should usually reduce the error by at least a factor of $2$ each time, so it shouldn't take more than about $10$ or $20$ iterations. The value of $a^n$ to check can be computed relatively quickly by starting with $a$, repeatedly squaring, and multiplying together the factors needed based on the base-$2$ representation of $n$, taking on the order of $\log_2 n$ operations. Overall, it should usually take at most a few thousand machine instructions, i.e., around a microsecond ($10^{-6}$ seconds) on most computers.
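The two steps described in this answer, the Fermat-little-theorem pre-filter and the float root followed by an exact integer check, might be sketched in Python as follows. The function names are mine, not the answer's, and the small ±2 neighborhood scan assumes $k$ is modest enough that the float estimate is nearly exact; for very large $k$ the wider iterative correction described above would be needed:

```python
def flt_filter(k: int, n: int) -> bool:
    """Necessary condition from Fermat's little theorem: for a prime p
    with (p - 1) | n, a**n mod p is always 0 or 1, so k mod p must be too."""
    for p in (3, 5, 7, 11, 13):
        if n % (p - 1) == 0 and k % p not in (0, 1):
            return False
    return True

def is_nth_power(k: int, n: int) -> bool:
    """Check whether k == a**n for some positive integer a."""
    if k < 1 or n < 1:
        return False
    if not flt_filter(k, n):  # cheap rejection before any root extraction
        return False
    a = round(k ** (1.0 / n))  # floating-point estimate of the n-th root
    # Verify exactly with integer arithmetic in a small neighborhood
    # of the estimate.
    return any(c ** n == k for c in range(max(1, a - 2), a + 3))
```

For instance, with $n = 4$ the filter uses $p = 5$: fourth powers are $0$ or $1 \pmod 5$, so $k = 7$ is rejected without computing any root at all.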








  • 1 – You skip important steps of your algorithm. How do you calculate $a = e^{\frac{\ln(k)}{n}}$? What is the time and space complexity of this calculation? How big is the difference between the exact value of $e^{\frac{\ln(k)}{n}}$ and the calculated value? Without calculating all these bounds it is not possible to decide whether the algorithm is efficient. – miracle173, yesterday

  • I'm not very familiar with how these are implemented, but wouldn't $\log_2$ be the most efficient $\log$? – Solomon Ucko, yesterday

  • 1 – @SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $\log$. However, due to certain properties of the natural logarithm and exponential, there are relatively fast and accurate implementations to determine their values, e.g., even just using a Taylor series expansion, that you don't have with other bases such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ and then convert to base $2$ before returning an answer. – John Omielan, yesterday

  • @miracle173 I don't believe "space" (I assume you mean memory) complexity will generally be an issue on even basic calculators and smartphones. As for the time complexity, it varies a lot depending on multiple factors, but I've tried to give at least rough bounds that I believe are fairly reasonable. – John Omielan, 7 hours ago


















0

My suggestion on a computer is to run a root finder.

Given a value $y$, one way is to hard-code the first couple of cases and then use an integer-valued binary search starting at $y/2$; the number of iterations is logarithmic in $y$ and thus linear in the input size (since the input takes $\ln y$ bits).

You can also write down the Newton's method recurrence and see whether it converges to an integer or not; this should become clear after the first couple of steps, once the error becomes small enough.
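The Newton's-method suggestion can be carried out entirely in integer arithmetic, which sidesteps the question of deciding whether a floating-point iteration "converges to an integer": start from an overestimate, iterate the standard recurrence with floor division, and stop when the iterate no longer decreases. A Python sketch (mine, not the answer's):

```python
def integer_nth_root(y: int, n: int) -> int:
    """Floor of y**(1/n) via the integer Newton recurrence
    x <- ((n-1)*x + y // x**(n-1)) // n, started from an overestimate."""
    if y < 0 or n < 1:
        raise ValueError("y must be non-negative and n positive")
    if y == 0:
        return 0
    x = 1 << (y.bit_length() // n + 1)  # guaranteed >= the true root
    while True:
        x_next = ((n - 1) * x + y // x ** (n - 1)) // n
        if x_next >= x:  # sequence stopped decreasing: x is the floor root
            return x
        x = x_next

def is_perfect_nth_power(y: int, n: int) -> bool:
    """y is an exact n-th power iff its floor n-th root powers back to y."""
    r = integer_nth_root(y, n)
    return r ** n == y
```

By the AM-GM inequality every iterate stays at or above the true root, and it strictly decreases until it reaches the floor root, so the loop terminates with the exact answer regardless of how large $y$ is.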












  • I don't think it's linear, given that you need to square the proposed number at every split. – Alex R., yesterday

  • @AlexR. you are right – gt6989b, 11 hours ago


















0

It is at least possible to do this in polynomial time. Assume $n$ is a $k$-bit number and you want to find positive integers $a$ and $b$ such that
$$a^b=n \tag{1}$$
or prove that such numbers don't exist.

We have $n<2^k$ because $n$ is a $k$-bit number, and so $b < k$.

We can simply check, for each possible $b$, whether there is an $a$ such that $(1)$ holds. For a given $b$ we can try to find $a$ by bisection. The bisection checks $O(\log n)=O(k)$ different values of $a$. Each check is a calculation of $a^b$, which can be done by repeatedly multiplying powers of $a$ by $a$. These powers of $a$ are smaller than $n$, so we multiply $k$-bit numbers at most $b\;(<k)$ times. A multiplication of two $k$-bit numbers needs $O(k^2)$ time. So for each candidate $b$ the algorithm needs $O(k^2)$ multiplications of $k$-bit numbers, i.e. $O(k^4)$ time, and $O(k^5)$ time over the $O(k)$ candidate exponents, which is still polynomial.
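The procedure just described, trying every exponent and bisecting on the base, is short to write down; a Python sketch (variable names are mine, using Python's built-in big integers for the $k$-bit arithmetic):

```python
def is_perfect_power(n: int) -> bool:
    """True iff n = a**b for integers a >= 2, b >= 2, found by trying
    every candidate exponent b and bisecting on the base a."""
    if n < 4:
        return False  # 4 = 2**2 is the smallest perfect power with a, b >= 2
    k = n.bit_length()
    for b in range(2, k + 1):  # b < k suffices since 2**b <= n < 2**k
        lo, hi = 2, 1 << (k // b + 1)  # hi**b > n, so the root is inside
        while lo <= hi:
            a = (lo + hi) // 2
            p = a ** b
            if p == n:
                return True
            if p < n:
                lo = a + 1
            else:
                hi = a - 1
    return False
```

Each inner loop is an ordinary integer bisection, so the structure mirrors the complexity count above: $O(k)$ exponents, $O(k)$ bisection steps each, and one exact power computation per step.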



















              share|cite|improve this answer











              $endgroup$








              • 1




                $begingroup$
                you skip important steps of your algorithm. How do you calculate $a = e^fracln(k)n$. What is the time and space complexity of this calculation? How big is the difference of the exact value of $e^fracln(k)n$ and the calculated value of $e^fracln(k)n$? Without calculating all this bounds it is not possible to decide if the algorithm is efficient.
                $endgroup$
                – miracle173
                yesterday










              • $begingroup$
                I'm not very familiar with how these are implemented, but wouldn't $log_2$ be the most efficient $log$?
                $endgroup$
                – Solomon Ucko
                yesterday







              • 1




                $begingroup$
                @SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $log$. However, due to certain natural logarithm and exponential properties, there are relatively fast & accurate implementations to determine their values, e.g., even just using a Taylor series expansion, that you don't have with other bases, such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ & then convert to base $2$ before returning an answer.
                $endgroup$
                – John Omielan
                yesterday











              • $begingroup$
                @miracle173 I don't believe "space" (I assume you mean memory) complexity will generally be an issue on even basic calculators & smart phones. As for the time complexity, it varies a lot depending on multiple factors, but I've tried to give at least rough bounds that I believe are fairly reasonable.
                $endgroup$
                – John Omielan
                7 hours ago















              2












              $begingroup$

              In the specific case where you already know not only the number being checked but also the power, as the question's comment by the OP to Servaes states, then you have something like



              $$k = a^n tag1labeleq1$$



              where $k$ and $n$ are known integers, but with $a$ being an unknown value to check whether or not it's an integer. For $n$ not being too large, one relatively fast & easy way to show that $a$ is not an integer is get all factors of $n$ and check those which are $1$ less than a prime. In those cases, Fermat's little theorem shows that $k$ must be congruent to $0$ or $1$ modulo this prime. If it's not, then $a$ can't be an integer. Doing this test is especially useful if you are using a fixed value of $n$ and checking many values of $k$.



              If you don't do the initial check above or it passes, you can next perhaps use a function to get the $n$'th root, such as using the "pow" function in C/C++ with a second argument of $frac1.0n$, to get something like



$$a = \sqrt[n]{k} \tag{2}\label{eq2}$$



As for speed & accuracy issues, I once did a test on all integers from $1$ to $5 \times 10^{11}$, getting their sixth roots (using pow) and then cubing the results, plus another test getting all square roots directly (using sqrt), and then finding the maximum of the absolute & relative differences. I used VS 2008 on a 64-bit Windows 7 computer with an AMD FX(tm)-8320 eight-core processor at 3.5 GHz, 8 GB RAM, and 8 MB of L2 & L3 cache. Square roots took 16403 seconds, while sixth roots plus cubing took 37915 seconds. The maximum absolute difference was $8.149\ldots \times 10^{-10}$ and the maximum relative difference was $1.332\ldots \times 10^{-15}$. This gives an indication of how fast & accurate the library routines are, although results will obviously vary depending on the compiler & machine involved.



Alternatively, taking natural logarithms of both sides (you could use any base, but I suspect that, implementation-wise, $e$ will likely be the fastest, if not also the most accurate) gives



$$\ln(k) = n\ln(a) \;\Rightarrow\; \ln(a) = \frac{\ln(k)}{n} \;\Rightarrow\; a = e^{\frac{\ln(k)}{n}} \tag{3}\label{eq3}$$



As this involves $2$ basic steps, taking a logarithm and then exponentiating, it may take longer & involve a larger cumulative error than using $\eqref{eq2}$ instead.



Either method, on a computer, will give a floating-point value that is, even for large values of $k$, relatively close to the correct value of $a$.



You can now use any number of algorithms to relatively quickly & easily determine $a$ if it's an integer, or show that it's not. For example, start with the integer part obtained from $\eqref{eq2}$, call it $a_1$, and compute $k_1 = a_1^n$. If $k_1$ is not correct, then if it's less than $k$, check $a_2 = a_1 + 1$; else check $a_2 = a_1 - 1$, and call the new result $k_2$. If $k_2$ is still not correct, add or subtract the integer part (making sure it's at least $1$) of $\left|\frac{k - k_2}{k_1 - k_2}\right|$ to $a_2$ to get a new $a_1$ value to check, and repeat these steps as many times as needed. In almost all cases, I believe it should take few loops to find the correct value. However, you should also include checks for the case where there is no such integer $a$; this is usually detected when one integer value gives a result below $k$ and the next higher integer gives a result above $k$ (or vice versa).



As for overall speed & accuracy, this depends a lot on the machine, compiler, number types & final algorithm used. I assume the values fit within 64-bit unsigned integers, since most machines don't have native integer types larger than this, larger floating-point values have difficulty representing all integers (e.g., you get situations where $f + 1.0$ is stored as $f$), and packages providing a larger number of integer bits can be fairly slow. As my earlier test results show, the root calculations took an average of less than $10^{-7}$ seconds each, i.e., a few hundred machine instructions at most. The accuracy of a single calculation would be better than the $10^{-10}$ seen for multiple ones, likely at least around $10^{-12}$. Assuming the value of $a$ only goes up to about $10^{17}$ (as the maximum value of a 64-bit unsigned integer is about $1.85 \times 10^{19}$), this would give a maximum initial error in $a$ of at most a few thousand. The algorithm I propose to determine the final value should usually reduce the error by at least a factor of $2$ each time, so it shouldn't take more than about $10$ or $20$ iterations. Getting the value of $a^n$ to check can be done relatively quickly by starting with $a$, repeatedly squaring, & multiplying together the powers needed based on the base-$2$ representation of $n$, so it takes on the order of $\log_2 n$ operations. Overall, it should usually take at most a few thousand machine instructions, i.e., around a microsecond ($10^{-6}$ seconds) on most computers.






You skip important steps of your algorithm. How do you calculate $a = e^{\frac{\ln(k)}{n}}$? What are the time and space complexity of this calculation? How big is the difference between the exact value of $e^{\frac{\ln(k)}{n}}$ and the calculated value? Without these bounds it is not possible to decide whether the algorithm is efficient.
– miracle173
yesterday










I'm not very familiar with how these are implemented, but wouldn't $\log_2$ be the most efficient $\log$?
– Solomon Ucko
yesterday







@SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $\log$. However, the natural logarithm and exponential have properties that allow relatively fast & accurate implementations, e.g., even just using a Taylor series expansion, that you don't have with other bases such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ & then convert to base $2$ before returning an answer.
– John Omielan
yesterday











@miracle173 I don't believe "space" (I assume you mean memory) complexity will generally be an issue on even basic calculators & smartphones. As for the time complexity, it varies a lot depending on multiple factors, but I've tried to give at least rough bounds that I believe are fairly reasonable.
– John Omielan
7 hours ago













              edited 7 hours ago

























              answered yesterday









John Omielan
4,349







              My suggestion on a computer is to run a root finder.



Given a value $y$, one way is to hard-code the first couple of values and then run an integer-valued binary search starting from $y/2$, which takes a number of steps logarithmic in $y$ and thus linear in the input size (since the input takes about $\ln y$ bits).



You can also write down the Newton's method recurrence and check whether it converges to an integer or not; this should become clear after the first couple of steps, once the error becomes small enough.
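The Newton route can be sketched in integer arithmetic (my own illustration, assuming a GCC/Clang-style `__builtin_clzll` and $n \ge 2$): starting from a power-of-two upper bound, the iteration $x \leftarrow \lfloor((n-1)x + \lfloor y/x^{n-1}\rfloor)/n\rfloor$ decreases monotonically until it reaches $\lfloor y^{1/n}\rfloor$; $y$ is then a perfect $n$-th power exactly when that result raised to the $n$ equals $y$.

```c
#include <stdint.h>

/* min(a^e, cap + 1) computed without overflow. */
static uint64_t pow_capped(uint64_t a, unsigned e, uint64_t cap) {
    uint64_t r = 1;
    while (e--) {
        if (a != 0 && r > cap / a) return cap + 1;
        r *= a;
    }
    return r;
}

/* Integer Newton iteration for floor(y^(1/n)), assuming 0 <= y < UINT64_MAX.
 * Starts from a power-of-two upper bound on the root and iterates
 *   x <- ((n-1)*x + y / x^(n-1)) / n
 * which decreases until it stops changing, at which point x is the answer. */
uint64_t newton_nth_root(uint64_t y, unsigned n) {
    if (y < 2) return y;
    unsigned bits = 64 - (unsigned)__builtin_clzll(y);   /* bit length of y */
    uint64_t x = 1ULL << ((bits + n - 1) / n);           /* x^n >= y */
    for (;;) {
        uint64_t p = pow_capped(x, n - 1, y);            /* capped: y/p safe */
        uint64_t next = ((n - 1) * x + y / p) / n;
        if (next >= x) return x;                         /* converged */
        x = next;
    }
}
```

A perfect-power check then compares `pow_capped(newton_nth_root(y, n), n, y)` against `y`.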






I don't think it's linear, given that you need to square the proposed number at every split.
– Alex R.
yesterday











@AlexR. you are right
– gt6989b
11 hours ago















              answered yesterday









gt6989b
35.1k











It is at least possible to do this in polynomial time. Assume $n$ is a $k$-bit number and you want to find positive integers $a$ and $b$, with $b \ge 2$, such that $$a^b=n\tag{1}$$ or prove that such numbers don't exist.



We have $$n<2^k$$ because $n$ is a $k$-bit number; since $a \ge 2$, this gives $2^b \le a^b = n < 2^k$ and so $$b \lt k$$



We can simply check, for each possible $b$, whether there is an $a$ such that $(1)$ holds. For a given $b$ we can find $a$ by bisection. The bisection checks $O(\log n)=O(k)$ different values of $a$. Each check is a calculation of $a^b$, which can be done by repeatedly multiplying powers of $a$ by $a$. These powers of $a$ are smaller than $n$, so we multiply $k$-bit numbers at most $b\ (< k)$ times. A multiplication of two $k$-bit numbers needs $O(k^2)$ time. So for each exponent $b$ the algorithm needs $O(k^2)$ multiplications of $k$-bit numbers, i.e. $O(k^4)$ time, and hence $O(k^5)$ time over all $O(k)$ values of $b$.
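The procedure above can be sketched as follows (a minimal illustration assuming $n$ fits in 64 bits and $n <$ `UINT64_MAX`; the names are mine). Each exponent $b$ gets its own bisection, with an overflow-capped power so that $\text{mid}^b$ can be compared against $n$ safely:

```c
#include <stdbool.h>
#include <stdint.h>

/* min(a^b, cap + 1) computed without overflow, for safe comparison with n. */
static uint64_t pow_capped(uint64_t a, unsigned b, uint64_t cap) {
    uint64_t r = 1;
    while (b--) {
        if (a != 0 && r > cap / a) return cap + 1;
        r *= a;
    }
    return r;
}

/* Returns true iff n = a^b for some integers a, b >= 2: for each of the
 * O(k) candidate exponents b (k = bit length of n), bisect over the base. */
bool is_perfect_power(uint64_t n) {
    if (n < 4) return false;
    for (unsigned b = 2; b < 64 && (1ULL << b) <= n; b++) {
        uint64_t lo = 1, hi = n;                 /* invariant: lo^b <= n */
        while (lo < hi) {
            uint64_t mid = lo + (hi - lo + 1) / 2;
            if (pow_capped(mid, b, n) <= n) lo = mid;
            else hi = mid - 1;
        }
        if (pow_capped(lo, b, n) == n) return true;
    }
    return false;
}
```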






                  answered yesterday









miracle173


























