abdullah wrote:
Ok. I do not fully follow the advantages of using minor instead of major, but for me it is ok. Could we agree on one or the other, so that we do not have to recalculate the partials next week because of changing from one to the other? Ulrich, as you proposed it first, what do you think?

I also do not understand the advantage of using the minor axis. Using the major axis has the following advantages:
abdullah wrote:
If the constraints are fully independent (the system matrix formed by the gradients has an independent row for each, i.e. the rank of the matrix produced by the several separate constraints equals the number of constraints), then you can certainly apply two or more solver constraints to implement a UI constraint. In fact, you are encouraged to do so, as this simplifies the code. One simple example is the coincident constraint, implemented as two equality constraints:

int System::addConstraintP2PCoincident(Point &p1, Point &p2, int tagId)
{
    addConstraintEqual(p1.x, p2.x, tagId);
    return addConstraintEqual(p1.y, p2.y, tagId);
}

I have a more human-readable way to put this: addConstraintXXX is supposed to remove exactly one degree of freedom. The UI constraint "coincident" has to remove two degrees of freedom, so it makes two addConstraintXXX calls. Correct?
% Closed-form (Ferrari-style) quartic roots evaluated over a grid,
% with the ellipse and its foci overlaid on the result.
A = 3; B = 2;                                   % ellipse semi-axes

% Sampling grid (Px, Py)
base_npts = 100;
nptsA = A*base_npts; nptsB = B*base_npts;
px = 2*(.5+(-nptsA:nptsA))/base_npts;
Px = ones(2*nptsB+1,1)*px;
py = 2*(.5+(-nptsB:nptsB))/base_npts;
Py = py'*ones(1,2*nptsA+1);

% Quartic coefficients a*z^4 + b*z^3 + c*z^2 + d*z + e at each grid point
a = -(1i*B^2-1i*A^2)/4;
b = -(B*Py+1i*A*Px)/2;
c = 0;
d = -(B*Py-1i*A*Px)/2;
e = (1i*B^2-1i*A^2)/4;

th = pi*(0:360)/180;                            % angles for drawing the ellipse

% Depressed quartic and resolvent quantities (delta0, delta1)
p = -3*b.^2./(8*a.^2);
q = (b.^3+8*a.^2.*d)./(8*a.^3);
dta0 = real(12*a.*e-3*b.*d);
dta1 = real(27*b.^2.*e+27*a.*d.^2);
QP1 = (dta1.^2-4*dta0.^3);
Qcube = (dta1+sqrt(QP1))/2;
Q = nthroot(real(Qcube),3).*double(imag(Qcube)==0) + (Qcube).^(1/3).*double(~(imag(Qcube)==0));
S = 1/2*sqrt(-2*p/3+(Q+dta0./Q)./(3*a));

% The four roots
R1 = -4*S.^2-2*p+q./S;
R2 = -4*S.^2-2*p-q./S;
X1 = -b./(4*a) - S + 1/2*sqrt(R1);
X2 = -b./(4*a) + S + 1/2*sqrt(R2);
X3 = -b./(4*a) - S - 1/2*sqrt(R1);
X4 = -b./(4*a) + S - 1/2*sqrt(R2);

% Select the relevant root depending on quadrant and discriminant sign
compl = X1.*double( (Px > 0 & QP1 > 0) | (Py > 0 & QP1 < 0) ) + ...
        X4.*double(~( (Px > 0 & QP1 > 0) | (Py > 0 & QP1 < 0) ));
T = -1i*log(compl);

% Display the angle field with the ellipse and its foci overlaid
figure(1);
image(px,py,64/(2*pi)*(pi+real(T)));
axis equal; hold on;
plot(A*cos(th),B*sin(th),'g','LineWidth',2);
plot([sqrt(A^2-B^2) -sqrt(A^2-B^2)],[0 0],'g*','LineWidth',2);
DevJohan wrote:
To avoid the solution jumping from one point on the ellipse and circle to another point on the ellipse and circle, I think some kind of extra data is required. An intermediate line is one possibility: constrain one end of the line to the ellipse and the other to the circle, and add tangency constraints ellipse-line and circle-line.

This is brilliant! Here's how I see it:
This tangency point does not necessarily have to be visible, but it can be quite useful if it is.
marktaff wrote:
In reading this thread, I've seen you guys beat yourselves up doing this conic-section math in Cartesian coordinates instead of in polar coordinates, where it is much easier, perhaps in a perifocal frame. Is there a reason we aren't using polar forms, and just converting givens/results to and from Cartesian form as required?

Maybe because I don't know how to describe a circle with a center offset from the origin in polar coordinates ... I'm doing my best! I'll start another thread in my brain for "perifocal frame"; I haven't thought about it.