Why does this code converge?
p is any composite number.
(For small p the inner loop may need fewer than 250 iterations; it's a heuristic.) [CODE]{
p=1237*1234577; c=ceil(sqrt(p));
for(n=c, c+10000,
  b=lift(Mod(n^2,p)); a=lift(Mod(b^2,p));
  for(y=1, 250,
    t=ceil(sqrt(b^2-a));
    b=lift(Mod(t^2,p)); a=lift(Mod(b^2,p));
    if(b<c, break());
  );
  localprec(7); b=b/c/1.;
  if(b<.1, print(b););
);}[/CODE] Regards, Roman
It is easy to see that t in this code is the coordinate of the low point of one of the "teeth of the saw" on the graph of x^2 mod p (t1 is the upper one).
In the second cycle we simply jump to the next such "tooth" and continue. Curiously, this process has two outcomes: either some closed loop (a ring) or zero, or a sub-sqrt value of t^2 mod p one "jump" before. I don't know whether this heuristic is new or old, but it looks like a sieve over the n values, and hypothetically it can point to some dependency in x^2 mod p.
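The PARI/GP loop above can be sketched in Python as follows. This is my own rough translation, not the author's code; the key fact it relies on is that a = b^2 mod p, so b^2 - a is always a multiple of p and the "jump" t = ceil(sqrt(b^2 - a)) lands just above a multiple of p:

```python
import math

def saw_search(p, n_range=10000, max_jumps=250):
    """Collect (t, residue) pairs with t^2 mod p < sqrt(p), via 'saw jumps'."""
    c = math.isqrt(p) + 1                 # ceil(sqrt(p)) for non-square p
    hits = []
    for n in range(c, c + n_range):
        b = n * n % p
        a = b * b % p                     # a = n^4 mod p, so p divides b^2 - a
        if b < c:                         # already sub-sqrt; nothing to iterate on
            continue
        for _ in range(max_jumps):
            t = math.isqrt(b * b - a)
            if t * t < b * b - a:         # round up: t = ceil(sqrt(b^2 - a))
                t += 1
            b = t * t % p                 # jump to the low point of the next "tooth"
            a = b * b % p
            if b < c:                     # sub-sqrt residue found
                hits.append((t, b))
                break
    return hits

p = 1237 * 1234577
print(saw_search(p, n_range=100)[:3])    # first hit: t=67687, t^2 mod p = 14722
```

For this composite p the very first starting value n = c already breaks out after one jump, which matches the behaviour the thread describes.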
Most interesting: we can try the same method (saw jumps) for x^a mod p with a>2, and (with some hope) find an x such that x^a mod p < p^(a).
Dreams, only dreams, of course)) The curse of sqrt is too strong.
Once again: why does it work? (Personally, I don't know yet))
[CODE]\p300
{
p=233108530344407544527637656910680524145619812480305449042948611968495918245135782867888369318577116418213919268572658314913060672626911354027609793166341626693946596196427744273886601876896313468704059066746903123910748277606548649151920812699309766587514735456594993207;
c=ceil(sqrt(p));
for(n=1, p,
  u=c+n; \\ initial values
  b=lift(Mod(u^2,p)); a=lift(Mod(b^2,p));
  for(y=1, 250, \\ the riddle is here, in this cycle
    t=ceil((b^2-a)^(1/2));
    b=lift(Mod(t^2,p)); a=lift(Mod(b^2,p));
    if(b<c, break()); \\ break at sub-sqrt residue
  );
  localprec(7); z=(b/c/1.);
  if(z<1, print(z," ",t));
);
}[/CODE]
No one here? I'm talking to myself)) Let [B]p[/B] = p+1260; then for the same t from the code above the residuals will be *slightly* different, and this [B]p[/B] is prime.
How small can the residuals be in this case? Or, in other words: why can't we find small residuals for the prime [B]p[/B], while they are still small for our composite p?
Okay, I'm here. I've been watching occasionally since the first post.
Could you explain what it is supposed to do, what it does, and what the question is?
Factorization of numbers.

[QUOTE=RomanM;584104]Factorization of numbers.[/QUOTE]
That's too vague. Please explain the code by answering the three questions asked. 1. What is it supposed to do? > Describe the code and the algorithm step by step. 2. What does it do? > You may skip this part if the observed behaviour matches the expected behaviour. 3. What is the question? > Explain the question "Why does this code converge?", i.e. what you mean by it and what an answer should look like.
Ok!
1. The code finds values t > sqrt(p) (p is any number, prime or composite) such that mod(t^2,p) < sqrt(p), and it does this in a [B]very[/B] unusual way, far from the common approach. The algorithm is quite simple. Take some integer u > sqrt(p), set b = mod(u^2,p) and a = mod(b^2,p) = mod(u^4,p). [From (b-y)^2 == 0 mod p we get b^2 - 2*b*y + y^2 == 0 mod p, or a - 2*b*y + y^2 == 0. Solution: y = b - sqrt(b^2-a) (and y = b + sqrt(b^2-a); we use the first). Make y an integer and compute t = b - y = ceil(sqrt(b^2-a)).] So t = ceil(sqrt(b^2-a)), where t is [B]some[/B] integer.) Let u = t, and do all this again, in a cycle. After a few steps the value of b becomes less than sqrt(p) (or the cycle enters some ring).
2. See 3.
3. Why the hell does this even work???
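One jump of the algorithm above, worked through numerically in Python (my own check, using the composite p from the first post):

```python
import math

p = 1237 * 1234577          # composite modulus from the first post
c = math.isqrt(p) + 1       # ceil(sqrt(p)) = 39080
u = c                       # any starting integer above sqrt(p)
b = u * u % p               # b = 74651
a = b * b % p               # a = u^4 mod p, hence p divides b^2 - a
assert (b * b - a) % p == 0

# y = b - sqrt(b^2 - a) solves y^2 - 2*b*y + a == 0, and t = b - y = sqrt(b^2 - a)
t = math.isqrt(b * b - a)
if t * t < b * b - a:       # round up: t = ceil(sqrt(b^2 - a))
    t += 1
print(t, t * t % p)         # 67687 14722: one jump already lands below sqrt(p)
```

Here b^2 - a happens to equal 3p, so t = ceil(sqrt(3p)) and the new residue t^2 - 3p is small, which is exactly the "tooth of the saw" picture.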
[QUOTE=RomanM;584239]
3. why the hell does this even work???[/QUOTE] Well, how do you know that it [I]actually[/I] works? Did you try it on all input values up to some limit, so that you could declare "I tested it for all values up to 10^9 and it converged every time"? Maybe if you had done that, there could have been some discussion. Before you did that, how would you convince people that there is anything worth spending their time on, or even worth [I]considering[/I] reading past the first post? All you are showing is "I tested this on one number! Look!" Everyone says "meh" and moves on to reading something else.
Write [$]\left\lceil \sqrt{b^2-a} \right\rceil[/$] as [$]\sqrt{b^2-a}+\epsilon[/$], where [$]\epsilon[/$] is between 0 and 1. Let's see what happens when we square this. We get [$]b^2-a+2\epsilon\sqrt{b^2-a}+\epsilon^2[/$].
We know that [$]b^2-a[/$] is a multiple of p, so the value of b on the next iteration is at most (and turns out to be equal to) [$]2\epsilon\sqrt{b^2-a}+\epsilon^2[/$], which is around [$]2\epsilon b[/$] (until b gets close to sqrt(p), when it will tend to be smaller; for b < sqrt(p) it will be 0). In other words, we multiply b by [$]2\epsilon[/$] to get the new value of b. If [$]\epsilon[/$] behaved like a uniformly random number between 0 and 1, then we would expect values of b to decrease in the long term (exercise: what is the expected rate of decrease?). From the Taylor expansion we see that [$]\epsilon[/$] is roughly equal to the fractional part of [$]\frac{a}{2b}[/$]. For small b this does behave essentially like a uniformly random number between 0 and 1; for some larger b it is skewed towards the smaller end, but this still means we expect b to fall. There may be a neater explanation; this is just the first thing I came up with.
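The exercise has a closed-form answer under the uniform-epsilon model: the expected per-step factor is exp(E[log(2*eps)]) = exp(log 2 - 1) = 2/e, roughly 0.736, so b loses about 30% per jump on average. A quick Monte Carlo sanity check of that expectation (my own sketch, not from the thread):

```python
import math
import random

# If b_new ~ 2*eps*b with eps uniform on (0,1), the geometric-mean shrink
# factor per step is exp(E[log(2*eps)]).  Analytically E[log(2*eps)] =
# log(2) + E[log(eps)] = log(2) - 1, i.e. a factor of 2/e per jump.
random.seed(1)
logs = [math.log(2 * random.random()) for _ in range(200_000)]
mean = sum(logs) / len(logs)
print(mean, math.log(2) - 1)   # both close to -0.3069
```

At that rate a starting b of size about p drops below sqrt(p) in roughly log(sqrt(p)) / 0.307 steps, consistent with the inner loop almost always breaking well inside its 250-iteration budget.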