Name: C Y Date: 05/20/04-01:50:49 AM Z


--- Tim Daly <[daly_at_HIDDEN-E-MAIL]> wrote:
> I know this is a research problem though hardly one that merits
> papers on the subject, I guess.

Actually it probably does, although I’m not quite sure in what field.

> My goal isn’t to solve physics/math problems. My goal is to build
> a system that will be used by computational mathematicians 30
> years from now. Once this is the stated goal several things become
> clear.

[snip]

> First, claims are made which cannot be reproduced. Citing results of
> the program runs without presenting the programs is equivalent to
> citing theorems without providing proofs. How can a referee properly
> review such work? Physics and chemistry require reproduced results
> before claims are accepted.

A minor but I think important point is that such concerns are a very
sound argument for making a computer algebra system portable across
multiple computer systems and lisp implementations. Even a properly
documented mathematical program depends on the proper (or at least
expected) behavior of the software on top of which it runs. Getting
identical results from different lisps/operating systems/platforms is a
good check that all levels of the system are operating as expected when
solving a problem.
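
As a rough sketch of the kind of check I mean - the function and the
expected value below are purely illustrative, not anything from Axiom -
one could keep a file of regression forms and run it, unchanged, under
GCL, SBCL, CLISP, and so on, comparing the printed results:

    ;; regression-check.lisp -- evaluate the same forms under several
    ;; lisps and compare the output.  The computation is only a stand-in.
    (defun binomial-row (n)
      "Coefficients of (x + 1)^n, a toy stand-in for a real algorithm."
      (loop with row = (list 1)
            repeat n
            do (setf row (mapcar #'+ (cons 0 row) (append row (list 0))))
            finally (return row)))

    (defun check (name actual expected)
      (format t "~a: ~a~%" name (if (equal actual expected) "ok" "MISMATCH")))

    (check "binomial-5" (binomial-row 5) '(1 5 10 10 5 1))

If any level of the stack misbehaves on one platform, the mismatch shows
up immediately, which is exactly the kind of cross-check a closed,
single-platform system can't offer.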

> Second, the programs are either not available or published as
> contributions. In the first case who is to know if the actual reason
> for an algorithmic speedup turns out to be a compiler switch rather
> than some theoretical reason like term ordering in a groebner basis
> computation? Since it is unpublished the code is likely to die thus
> undermining both the basis for the claim and the possibility that
> other researchers can build on the work.

Indeed.

> The second case is even worse in some sense. I have 1100 domains
> in Axiom (some of which I wrote) and 100+ algorithms in Magnus
> with no theoretical documentation; indeed most have no documentation
> at all. In the 30 year view how is the next generation supposed to
> build upon the work we’ve done so far? How can they see the evolution
> of algorithms? How can they maintain the code without the theory?

I’d say it’s not worse, but it does replace “impossible” with “extremely
difficult”. Closed and vanished code can never be puzzled out;
undocumented code can be figured out if the motivation is sufficiently
strong. In most cases it will not be, and the work will still be lost,
so I agree the practical results in the two cases will be similar.

> Axiom represents over 30 years and over 300 man-years of research.
> I don’t believe that there will be funding to build systems that are
> this large and this general. Even if one funded such an effort we
> end up with a lot of rework that virtually no-one wants to do.

Indeed, funding would also be made more difficult by the existence of
commercial competitors - funding for re-inventing the wheel would be
even harder to find than someone willing to do the work.

> So I’m proposing a goal for the 30 year horizon. We need to make an
> effort to collect the theory and the code and reunite the two. I
> realize that there are issues.
>
> One issue is, as you point out, that code has to deal with grubby
> details which the theory can skip. But real design choices are made
> when reducing theory to practice and these design choices greatly
> affect the results. We need to encourage the practice of explaining
> these design decisions.

Yes.

> For example, how are infinite objects (like
> groups) represented? We have learned that in simple domains like
> polynomials there are a wide range of design choices (dense, sparse,
> recursive, etc) that are appropriate for different problems.
>
> Another issue is that current systems don’t “reach up” close enough
> to the theory. The gap between the theory and the implementation (I
> call it the impedance mismatch) is too large for most systems. For
> instance, Magnus is implemented in C++ which is WAY too close to the
> machine and very, very far away from Infinite Group Theory (the
> Magnus domain). Thus the burden of crossing this gap falls on the
> programmer. Systems like Axiom are much closer to the mathematics.
> But not close enough. We need systems that span this gap in carefully
> structured ways so we can be efficient without being obscure.

Agreed. I would argue, though, that the priority list should look
something like this:

1. Well documented and understood.
2. Efficient (without being platform specific)

Computers are fast enough in this day and age that the far more
important problem is to understand what is happening. If an application
demands sacrificing portability and clarity for speed, that’s where a
special-purpose program is useful. Then perhaps Axiom could output the
logic that solves the problem at the Axiom level in some programming
language, and let the programmers fiddle from that point.
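
To pick up the polynomial example from the quoted text: the same
polynomial can be stored densely (one coefficient per power, zeros
included) or sparsely (only the nonzero terms), and which is better
depends entirely on the problem. A rough Lisp sketch, with names of my
own invention rather than anything from Axiom:

    ;; Two representations of 3x^5 + 2x + 7.

    ;; Dense: slot i holds the coefficient of x^i, zeros and all.
    ;; Compact and fast when few coefficients are zero.
    (defparameter *dense* #(7 2 0 0 0 3))

    ;; Sparse: (exponent . coefficient) pairs for the nonzero terms only.
    ;; Far better when the degree is huge but the terms are few.
    (defparameter *sparse* '((5 . 3) (1 . 2) (0 . 7)))

    (defun eval-dense (poly x)
      (loop for c across poly
            for power = 1 then (* power x)
            sum (* c power)))

    (defun eval-sparse (poly x)
      (loop for (e . c) in poly
            sum (* c (expt x e))))

    ;; (eval-dense *dense* 2)  => 107
    ;; (eval-sparse *sparse* 2) => 107

The answers are identical; only the cost profile changes. Recording why
one representation was chosen over the other is exactly the sort of
design decision that ought to live next to the code.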

> This is one of the root causes of your comment that “the
> practical implementation of the algorithm is often connected to the
> published algorithm in complicated ways”. The implemented algorithm
> should not be much longer than the published one.

Well, at the very least an implementation of the algorithm is likely to
need to tell Axiom things that are assumed in the paper. I suspect this
would be one of the sticky points in this type of work - implicit
assumptions made throughout much of a subfield, but which are hard for
newcomers to find and absorb. By the same token, it could prove
helpful to be able to easily look back on what those assumptions are.

> If we look at the 30 year horizon it is clear that all papers in
> computational mathematics will be online. We must set standards
> now, or at least strive for good examples, that make it possible
> to use the research effectively. In today’s terms we should be
> able to “drag and drop” a computational mathematics paper onto
> a system like Axiom and have it immediately available. (In 30
> year terms Axiom should know the “intentional stance” of the
> researcher and automatically incorporate the algorithms).

Which brings up the issue of making sure a paper knows what to tell
Axiom. I suspect what might happen (and might be a good way of doing
it) is that Axiom-based papers will reference other Axiom-based papers
as dependencies. (I guess this is what you’ve had in mind, Tim?) In
some sense, perhaps it would even be possible to examine the
implications of some alternative algorithm on other papers - i.e., if
the papers that depend on *old algorithm* are re-run with *new
algorithm*, are their results affected by the change?
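
A very crude sketch of what I have in mind - every name here is made up
for illustration - is that each paper would carry a small manifest of
the algorithms its results rely on, so the system could at least report
which results need re-checking when an algorithm is replaced:

    ;; Hypothetical table: paper -> algorithms its results rely on.
    (defparameter *depends-on*
      '(("paper-A" "groebner-basis" "poly-gcd")
        ("paper-B" "poly-gcd")
        ("paper-C" "risch-integration")))

    (defun papers-affected-by (algorithm)
      "Papers whose results should be re-checked if ALGORITHM changes."
      (loop for (paper . algorithms) in *depends-on*
            when (member algorithm algorithms :test #'string=)
              collect paper))

    ;; (papers-affected-by "poly-gcd") => ("paper-A" "paper-B")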

[snip]

> My current religious zealotism and wild-eyed, irrational planning
> (I admit it’s over-the-top-painful) claim is that we need to start
> with an old idea “Literate Programming” and evolve it to suit the
> needs of the next generation Computational Mathematician. Thus
> all of Axiom (and soon Magnus) has been rewritten into TeX documents.
> There are no C, Lisp, Spad, Makefile, etc files. Now I’m trying to
> ensure that new code added to the system includes the theory (or
> at least permission to use the paper so I can write the literate
> document).
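
For readers who haven’t run into the idea: a literate source file
interleaves the TeX exposition with named code chunks, and the code is
extracted from the document at build time. Something roughly like the
following - my own sketch of the style, not an actual Axiom pamphlet:

    \documentclass{article}
    \begin{document}
    \section{Greatest common divisor}
    The chunk below is the classical Euclidean algorithm; the
    surrounding prose is where the theory, references, and design
    decisions get recorded.

    <<euclid>>=
    (defun euclid-gcd (a b)
      "Classical Euclidean algorithm."
      (if (zerop b) a (euclid-gcd b (mod a b))))
    @
    \end{document}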

Two questions:

1) Are you hoping to eventually integrate Magnus’s capabilities into
Axiom?

2) Is there some systematic approach that should be put in place for
how to go back and document what’s already there? I.e., start with the
most basic code and document one’s way up the capabilities?

CY
