For a detailed list of the latest changes see the Subversion change log.
New method factorsSquarefreeHensel() in class FactorInteger. Improved multivariate Hensel lifting in class HenselMultUtil. Small improvements and fixes.
New method complexAlgebraicRoots() in class PolyUtilApp. Simple isolating interval refinement for real and complex roots. Alternative factoring of univariate polynomials over algebraic number fields via prime ideal decomposition in class FactorAlgebraicPrim. Improved parsing of complex numbers. Forced term orders in some situations and further small improvements and fixes.
New classes RealAlgebraicNumber and RealAlgebraicRing in package edu.jas.application. New experimental RootFactory with methods to compute complex roots for polynomials with coefficients in some complex algebraic extension field of the rational numbers. Uses the respective classes from package edu.jas.root in a recursive setting.
New generic factorization classes FactorRealAlgebraic and FactorRealReal. Small improvement for reduced / minimal Gröbner base computation.
ResidueRing. JRuby and Jython versions and examples of the extension field builder. Small improvements and bug fixes for the latest Eclipse and Java 1.7 versions.
New class ExtensionFieldBuilder with methods for algebraic and transcendental field extensions. Improvements in real and complex algebraic numbers. Improved polynomial parser for recursive representations. Small bug fixes.
New class HenselMultUtil for multivariate Hensel lifting; it will be used in polynomial factorization in the future. Some parts of the greatest common divisor computation now use multivariate Hensel lifting.
The JAS source (r3408) compiles on Apache Harmony 6.0 (r991881). The
unit tests pass with the exception of test cases involving object
serialization.
toScript() methods rewritten to reflect the Ruby language requirements and to differentiate between Ruby and Python.
More precise exceptions for modular computations, which now also return the discovered factors.
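The idea of an exception that carries a discovered factor can be sketched as follows. This is an illustrative sketch, not the JAS API: the class and method names (NotInvertibleException, ModularInverse.inverse) are made up for the example. When inversion modulo n fails because gcd(a, n) > 1, that gcd is a nontrivial factor of the modulus and is worth reporting.

```java
import java.math.BigInteger;

// Hypothetical sketch: an exception that carries a factor discovered
// during a modular computation, e.g. when inversion mod n fails.
class NotInvertibleException extends RuntimeException {
    final BigInteger factor; // the discovered factor of the modulus

    NotInvertibleException(BigInteger factor) {
        super("not invertible, factor = " + factor);
        this.factor = factor;
    }
}

public class ModularInverse {
    /** Inverse of a mod n; throws with the discovered factor on failure. */
    static BigInteger inverse(BigInteger a, BigInteger n) {
        BigInteger g = a.gcd(n);
        if (!g.equals(BigInteger.ONE)) {
            throw new NotInvertibleException(g); // gcd(a, n) divides n
        }
        return a.modInverse(n);
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(91); // 7 * 13, not prime
        try {
            inverse(BigInteger.valueOf(35), n);
        } catch (NotInvertibleException e) {
            System.out.println("factor of 91: " + e.factor); // gcd(35, 91) = 7
        }
    }
}
```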
Refactored parts of GenPolynomialTokenizer into the new RingFactoryTokenizer. Some artificial code was required to use solvable polynomials as ring elements, since solvable polynomials cannot implement RingElem&lt;GenSolvablePolynomial&lt;C&gt;&gt;. In some cases this resulted in wrong method dispatch for the multiply() method due to compiler optimizations. A workaround to detect and repair this is now implemented in class GenPolynomial.
New pair selection class OrderedSyzPairlist. Refactoring of the Gröbner base classes to optionally use the new pair selection. Back port of some JDK 1.6 constructs to be again compatible with JDK 1.5. Small improvements in the Kronecker factor combination in class FactorAbstract. Fixed a race condition in ThreadPool and improved termination detection in Terminator. Fixes in parallel reduced Gröbner base computations. Fixed univariate polynomial construction in Ideal.
Moved the Quotient* classes to package edu.jas.ufd. Fixed small bugs and cosmetics.
New classes MultiVarPowerSeries and MultiVarPowerSeriesRing. Mora's tangent cone reduction algorithm and standard base computation for power series in package edu.jas.ps. Iterator over exponent vectors.
BigDecimal coefficients. Removed the use of the underscore for algebraic number coefficients in the polynomial parser: now every recursive call of parse() from a ring factory is triggered by braces, which can be nested to any depth. Fixed a synchronization bug in the solvable polynomial relation tables and a parallelization bug in the parallel solvable polynomial Gröbner base computation. Use of unbounded thread pools to avoid deadlocks. Added the remaining parts for square-free decomposition in polynomial rings of characteristic p > 0. Changed the script representation of AN (AlgebraicNumbers).
New class edu.jas.kern.TimeStatus to provide user feedback for long-running tasks via method checkTime().
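The pattern behind such a helper can be sketched as a static time budget that long-running algorithms poll. This is a minimal sketch of the idea only; the real edu.jas.kern.TimeStatus API may differ, and the class and method names here are illustrative.

```java
// Sketch of a checkTime() helper: algorithms call checkTime() periodically
// and get an exception once a configured time budget is exceeded.
public class TimeStatusSketch {
    private static long start = System.currentTimeMillis();
    private static long limitMillis = Long.MAX_VALUE; // no limit by default

    static void setLimit(long millis) {
        start = System.currentTimeMillis();
        limitMillis = millis;
    }

    /** Called periodically from long-running algorithms. */
    static void checkTime(String msg) {
        if (System.currentTimeMillis() - start > limitMillis) {
            throw new RuntimeException("time limit exceeded: " + msg);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        setLimit(50); // 50 ms budget
        checkTime("early check passes");
        Thread.sleep(100);
        try {
            checkTime("late check");
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```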
Implemented some missing extGB() methods. New GBFactory for the selection of appropriate Gröbner base implementations.
New method isFinite() for all ElemFactorys and usage in SquarefreeFactory. Added some missing parts for factorization in polynomial rings of characteristic p > 0 and for ideal decomposition. Improvements in GBFactory.
New GBProxy, like GCDProxy, to run a sequential and a parallel Gröbner base computation concurrently.
Primitive element computation via normalPositionFor() in methods primitiveElement(), together with several conversion methods convertToPrimitiveElem(). A new index maps all algorithms from the book "Gröbner Bases" to their JAS equivalents.
Class Ideal extended with methods radicalDecomposition(), decomposition(), primeDecomposition() and primaryDecomposition(). Computation of extension and contraction ideals. Unit tests for the decomposition methods. Fixed a bug in multivariate polynomial factorization in Kronecker's method. Fixed a bug in square-free decomposition in the inseparable case.
Added a NO_THREADS flag to edu.jas.kern.ComputerThreads to avoid (some) thread creation, for usage in the Google App Engine.
Class Ideal extended with methods zeroDimRadicalDecomposition(), zeroDimDecomposition(), zeroDimPrimeDecomposition(), zeroDimPrimaryDecomposition() and zeroDimRootDecomposition(). Exact zero-dimensional ideal real root computation and approximation in methods PolyUtilApp.realAlgebraicRoots() and decimalApproximation(). Small enhancements to the Javadoc comments.
In package edu.jas.ufd, simplified and improved the factory classes. Refactorings of the parallel Gröbner bases computations for solvable polynomial rings. Improved logging for distributed Gröbner bases and the distributed middleware.
PrimeList. Construction of minimal univariate polynomials in zero-dimensional ideals. Supersets of complex and real roots of zero-dimensional ideals. More unit tests for real and complex roots of univariate polynomials and for zero-dimensional ideal roots. Minor enhancements and fixes.
New class GroebnerBasePartial. Small enhancements: a polynomial recursive coefficient parser and a matrix parser. Bookkeeping of all used polynomial variable names. New and improved unit tests.
Complex coefficients via algebraic coefficients in Q(i).
New classes ModLong and ModLongRing for faster modular arithmetic. New interfaces Modular and ModularRingFactory, implemented also by ModInteger to make both interchangeable. Improved factorization and several refactorings. Factorization mod p = 2 is now implemented.
New classes ComplexRoots, ComplexRootsAbstract and ComplexRootsSturm. The implementation provides an exact, infallible method which follows the numeric method of Wilf. It uses Sturm sequences following the Routh-Hurwitz method to count the number of complex roots within a rectangle in the complex plane.
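The Sturm-sequence idea can be illustrated on the simpler real-root case (JAS works exactly over the rationals and extends the technique to rectangles in the complex plane; this sketch uses double coefficients and counts real roots in an interval, so it is an approximation of the method, not the JAS implementation).

```java
import java.util.ArrayList;
import java.util.List;

// Sturm sequence sketch for a squarefree real polynomial: build the chain
// p, p', then negated polynomial remainders; the difference in sign
// variations at the interval ends counts distinct real roots in (a, b].
public class SturmSketch {
    static final double EPS = 1e-12;

    // coefficients are stored lowest degree first
    static int degree(double[] p) {
        int d = p.length - 1;
        while (d > 0 && Math.abs(p[d]) < EPS) d--;
        return d;
    }

    static boolean isZero(double[] p) {
        return degree(p) == 0 && Math.abs(p[0]) < EPS;
    }

    // Horner evaluation
    static double eval(double[] p, double x) {
        double r = 0.0;
        for (int i = degree(p); i >= 0; i--) r = r * x + p[i];
        return r;
    }

    static double[] derivative(double[] p) {
        int d = degree(p);
        double[] q = new double[Math.max(d, 1)];
        for (int i = 1; i <= d; i++) q[i - 1] = i * p[i];
        return q;
    }

    // polynomial remainder of a modulo b
    static double[] remainder(double[] a, double[] b) {
        double[] r = a.clone();
        int db = degree(b);
        int dr = degree(r);
        while (dr >= db && !isZero(r)) {
            double f = r[dr] / b[db];
            for (int i = 0; i <= db; i++) r[dr - db + i] -= f * b[i];
            r[dr] = 0.0; // force exact cancellation of the leading term
            if (dr == 0) break;
            dr = degree(r);
        }
        return r;
    }

    // Sturm chain: p, p', then negated remainders down to a constant
    static List<double[]> chain(double[] p) {
        List<double[]> s = new ArrayList<>();
        s.add(p);
        s.add(derivative(p));
        while (degree(s.get(s.size() - 1)) > 0) {
            double[] rem = remainder(s.get(s.size() - 2), s.get(s.size() - 1));
            if (isZero(rem)) break; // p was not squarefree
            for (int i = 0; i < rem.length; i++) rem[i] = -rem[i];
            s.add(rem);
        }
        return s;
    }

    // number of sign variations of the chain evaluated at x, zeros skipped
    static int variations(List<double[]> chain, double x) {
        int v = 0, last = 0;
        for (double[] q : chain) {
            double y = eval(q, x);
            int sgn = Math.abs(y) < EPS ? 0 : (y > 0 ? 1 : -1);
            if (sgn != 0) {
                if (last != 0 && sgn != last) v++;
                last = sgn;
            }
        }
        return v;
    }

    /** Number of distinct real roots of p in the interval (a, b]. */
    static int rootCount(double[] p, double a, double b) {
        List<double[]> c = chain(p);
        return variations(c, a) - variations(c, b);
    }

    public static void main(String[] args) {
        double[] p = {0, -1, 0, 1}; // x^3 - x, roots -1, 0, 1
        System.out.println(rootCount(p, -2, 2));   // 3
        System.out.println(rootCount(p, 0.5, 2));  // 1
    }
}
```

The Routh-Hurwitz variant used for complex roots applies the same variation counting along the edges of a rectangle.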
New class MultiplicativeSet with sub-classes: polynomials made co-prime in MultiplicativeSetCoPrime, polynomials made co-prime and squarefree in MultiplicativeSetSquarefree, and polynomials made irreducible in MultiplicativeSetFactors.
New distributed hybrid Gröbner base computation with new class TaggedMesageChannel to handle multiple message types and partners over one socket connection. Improved object serialization in the distributed hash table.
Adapter updated for JLinAlg version 0.6.
New interface ToRational, used for BigRational and RealAlgebraicNumber to remove type case distinctions in Interval.
clone() removed from the Element interface because of compiler changes.
New interface Squarefree; the abstract class is SquarefreeAbstract. Other main classes are SquarefreeFieldChar0, SquarefreeFiniteFieldCharP and SquarefreeInfiniteFieldCharP.
FactorAlgebraic can now also handle factorizations over multiple algebraic extensions, for example Q(i)(sqrt(2)). Class FactorAbsolute is now also extended by FactorAlgebraic, so that absolute factorizations can be computed over algebraic extensions of Q. Added new containers for absolute factorization results and tests for correct absolute factorizations. More toScript() methods and improvements in the Jython script interface. Minor additions and fixes.
ZZ, ZM, QQ, DD, CC, Quat, Oct, PolyRing, AN, RealN, RF, RC, LC, SolvPolyRing and RR, PS, Vec, Mat for the construction of rings and elements. Added a generic Complex class and ComplexRing factory.
Fixed a programming bug in factorization code.
Added a factory() method to the Element interface to have a uniform way to obtain a corresponding ring object.
Improved RealAlgebraicNumber so that it can be used as polynomial coefficients, for example GenPolynomial&lt;RealAlgebraicNumber&lt;BigRational&gt;&gt;. Real root isolation can now also be used for polynomials with real algebraic coefficients, for example RealRootsSturm&lt;RealAlgebraicNumber&lt;BigRational&gt;&gt;.
New package edu.jas.root (during CISIS/ECDS 2009). Reached 100,000 lines of Java code.
Renamed package edu.jas.ring to edu.jas.gb and edu.jas.module to edu.jas.gbmod.
Factorization interface with FactorAbstract class for common code. Factorization of univariate polynomials over several coefficient rings: modulo primes in class FactorModular, over the integers in class FactorInteger, over the rational numbers in class FactorRational and over algebraic numbers in class FactorAlgebraic&lt;C&gt; (where C can be ModInteger or BigRational). Multivariate polynomials are reduced to univariate polynomials via the Kronecker substitution, so this is not very efficient. Refactorings and fixes.
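The Kronecker substitution mentioned above can be sketched as follows. This is a simplified illustration (not the JAS code): a bivariate polynomial is mapped to a univariate one by substituting y = x^d, and the map is invertible on terms whose x-degree is below d.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of the Kronecker substitution y -> x^d for a bivariate polynomial;
// d must exceed the x-degrees of interest so the substitution is invertible.
public class KroneckerSketch {
    // term (i, j) -> coefficient, encoded as key i + 2^16 * j for simplicity
    static long key(int i, int j) { return i + (1L << 16) * j; }

    /** Substitute y = x^d: the term c*x^i*y^j becomes c*x^(i + d*j). */
    static Map<Integer, Long> substitute(Map<Long, Long> f, int d) {
        Map<Integer, Long> u = new TreeMap<>();
        for (Map.Entry<Long, Long> t : f.entrySet()) {
            int i = (int) (t.getKey() % (1L << 16));
            int j = (int) (t.getKey() / (1L << 16));
            u.merge(i + d * j, t.getValue(), Long::sum);
        }
        return u;
    }

    /** Inverse map, valid when every x-degree of the preimage is < d. */
    static Map<Long, Long> back(Map<Integer, Long> u, int d) {
        Map<Long, Long> f = new TreeMap<>();
        for (Map.Entry<Integer, Long> t : u.entrySet()) {
            f.put(key(t.getKey() % d, t.getKey() / d), t.getValue());
        }
        return f;
    }

    public static void main(String[] args) {
        Map<Long, Long> f = new TreeMap<>();
        f.put(key(1, 0), 1L); // x
        f.put(key(0, 1), 1L); // y
        Map<Integer, Long> u = substitute(f, 3); // x + x^3
        System.out.println(u);                    // {1=1, 3=1}
        System.out.println(back(u, 3).equals(f)); // true
    }
}
```

Factoring the univariate image and mapping factors back is what makes the approach correct but costly: the degree of the image grows like d per variable.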
Changed compareTo to be transitive, as required by Java's SortedMap implementations.
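Why transitivity matters can be shown with a classic counterexample (illustrative only, not the JAS code): a tolerance-based comparator. With a tolerance of 0.5 we get a == b and b == c but a &lt; c, violating the contract that TreeMap and SortedMap rely on for correct lookups.

```java
import java.util.Comparator;

// A tolerance-based comparator is non-transitive and therefore illegal
// as a SortedMap comparator: "approximately equal" does not chain.
public class TransitivitySketch {
    static final double EPS = 0.5;

    static final Comparator<Double> SLOPPY = (x, y) -> {
        if (Math.abs(x - y) <= EPS) return 0; // "approximately equal"
        return Double.compare(x, y);
    };

    public static void main(String[] args) {
        double a = 0.0, b = 0.4, c = 0.8;
        System.out.println(SLOPPY.compare(a, b)); // 0  (a "equals" b)
        System.out.println(SLOPPY.compare(b, c)); // 0  (b "equals" c)
        System.out.println(SLOPPY.compare(a, c)); // -1 (but a < c!)
    }
}
```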
Implemented an adaptor package for Apache Log4j to delegate logging calls to native Java logging.
Power series implementing the RingElem and RingFactory types in package edu.jas.ps, classes UnivPowerSeries and UnivPowerSeriesRing. The implementation follows the "Infinite Streams in Java" paper of D. Gruntz in PPPJ 2006.
New Condition and ColoredSystem classes. Conditions consist of an ideal (with lazy Gröbner base computation) for the conditions equal to zero and a multiplicative set for the conditions not equal to zero. Non-zero condition polynomials are reduced modulo the ideal of condition-zero polynomials, and the squarefree part of both condition polynomials is used. It differs from the MAS implementation by Schoenfeld, Pesch and others in the ideal used to store the zero conditions and some more details.
Exponent vectors with different storage unit sizes: long, int (now the default, as it seems to be the fastest), short and byte. The respective classes are ExpVectorLong, ExpVectorInteger, ExpVectorShort and ExpVectorByte.
New classes RGroebnerBaseSeq and RGroebnerBasePseudoSeq. Refactorings and fixes.
New classes Product and RegularRingElem. Added fraction-free pseudo reduction and Gröbner bases in GroebnerBasePseudoSeq. Minor refactorings and fixes.
New classes DGroebnerBaseSeq and EGroebnerBaseSeq. Added test methods for reducibility and refactored the sequential Gröbner base implementations.
Implemented the noncommutative product for solvable polynomials, together with relation tables. SolvableOrderedMapPolynomial extends OrderedMapPolynomial, RatSolvableOrderedMapPolynomial extends SolvableOrderedMapPolynomial, and interface SolvablePolynomial extends OrderedPolynomial. Some more left multiplication methods, left reduction and left Gröbner bases, two-sided Gröbner base test and computation. The Pairlist class was changed to avoid usage of criterion 4 when running for SolvablePolynomials; the criterion4() method itself also checks for SolvablePolynomials.
run / n | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
first | 3 | 13 | 32 | 92 | 128 | 188 | 274 | 420 | 683 | 1126 | 1795 | 2793 | 4380 | 6741 |
second | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 8 | 10 | 13 | 16 | 21 | 27 | 35 |
Timings in ms on AMD XP 2800+. Computation of (Y^n) * (X^n) with respect to the relation Y * X = X Y - H. In the first run the relation table is populated with the products Y^i * X^i, i = 2,...,n. In the second run the relations are reused, showing almost no computing time anymore for the products.
run / n | 2 | 3 | 4 | 5 | 6 | 7 |
---|---|---|---|---|---|---|
first | 28 | 94 | 303 | 1234 | 5185 | 24647 |
second | 1 | 12 | 107 | 782 | 4569 | 23897 |
The second example shows the computation of ( Xa + Xb + Xc + Ya + Yb + Yc + Ha + Hb )^n in U(sl_3). Since only products of two variables are stored in the relation table, the improvement is minimal (for low n).
# Threads / CPUs | # JVMs | time (sec) | #put | #remove | % total |
---|---|---|---|---|---|
1, seq | 1 | 160.2 | 70 | 327 | 13.5 |
1, par | 1 | 157.0 | 70 | 327 | 13.5 |
2, par | 1 | 82.2 | 72 | 329 | 12.7 |
1, dist | 1 | 177.2 | 77 | 334 | 11.4 |
2, dist | 2 | 92.2 | 90 | 347 | 8.6 |
4, dist | 2 | 56.2 | 112 | 369 | 5.9 |
8, dist | 2 | 58.9 | 255 | 516 | 1.5 |
4, dist | 4 | 51.2 | 117 | 374 | 5.5 |
6, dist | 4 | 43.7 | 129 | 386 | 4.6 |
8, dist | 4 | 62.9 | 259 | 519 | 1.5 |
Timings taken on a 16-CPU Intel Xeon SMP computer running
at 2.7 GHz with 32 GB RAM.
JVM 1.4.2 started with AggressiveHeap and UseParallelGC.
#JVMs = number of distinct Java virtual machines.
#put = number of polynomials put to the pair list.
#remove = number of pairs removed from the pair list,
i.e. after application of the criteria,
but including nulls up to now.
% total = percent of removed pairs from the total pairs generated,
#remove / ( #put * (#put-1) / 2 ) * 100.
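As a quick sanity check, plugging the first table rows into the % total formula reproduces the published column values.

```java
import java.util.Locale;

// Check of the "% total" column: percentage of removed pairs among all
// #put*(#put-1)/2 possible pairs, using values from the table above.
public class PairStats {
    static double percentTotal(int put, int remove) {
        return 100.0 * remove / (put * (put - 1) / 2.0);
    }

    public static void main(String[] args) {
        // "1, seq" row: #put = 70, #remove = 327 -> 13.5
        System.out.println(String.format(Locale.ROOT, "%.1f", percentTotal(70, 327)));
        // "4, dist" row: #put = 112, #remove = 369 -> 5.9
        System.out.println(String.format(Locale.ROOT, "%.1f", percentTotal(112, 369)));
    }
}
```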
# Threads / CPUs | # JVMs | time (sec) | #put | #remove | % total |
---|---|---|---|---|---|
1, dist | 1 | 24726.2 | 140 | 781 | 8.0 |
2, dist | 2 | 12356.1 | 165 | 806 | 5.9 |
4, dist | 4 | 6859.3 | 218 | 859 | 3.6 |
8, dist | 4 | 7465.1 | 411 | 1054 | 1.2 |
8, dist | 8 | 6412.9 | 344 | 986 | 1.6 |
8, dist | 8 | 7173.3 | 399 | 1041 | 1.3 |
Overhead for the distributed variant is about 10% in Katsura 6 (G). "Distributed 1" means one distributed process is running for the reduction of S-polynomials. There is always a master process handling polynomial input/output, setup and management of the distributed workers, and handling of the pair list. Communication between master and workers is always via TCP/IP with object serialization, even if running on one computer.
New classes implementing a distributed hash table to hold the polynomials in distributed GB. The index of a polynomial in Pairlist is used as hash key. Communication now uses message types GBTransportMess. Polynomials are now transported only once to each reducer, since only polynomial hash indexes are transported. The distributed list is asynchronous and late-updated, so some duplicate H-polynomials (head terms) could be (are) produced. Solution by a local put to the hash table with a dummy index? Timings are not dramatically better.
Todo: check reduction algorithm to use later arriving polynomials.
-Xms200M -Xmx400M -XX:+AggressiveHeap -XX:+UseParallelGC
Memory must be adjusted with respect to your situation.
Separated versions with Pair Sequence Respecting Order (PS) and normal versions. PS versions try to keep the order of reduced polynomials added to the ideal base the same as in the sequential version. Normal versions now run OK on a parallel computer with the right JVM options. Refactoring with Eclipse (organize imports, static methods).
# Threads / CPUs | Katsura 6 TO(G) load* | Katsura 6 TO(G) empty+ | Katsura 7 TO(G) load* | Katsura 7 TO(G) empty+ |
---|---|---|---|---|
seq | 184.5 | 153.3 | ||
1 | 181.5 | 4% / 159.7 | 28418.6 | |
2 | 118.6 | s2.02 / p2.11 / 75.6 | p2.06 / 13760.0 | |
4 | 76.8 | s3.79 / p3.95 / 40.4 | 6256.9 | p4.56 / 6225.1 |
8 | 43.2 | s7.19 / p7.49 / 21.3 | 3240.7 | p8.56/ 3318.8 |
10 | 42.5 | |||
12 | 40.5 | 2288.1 | p9.90 / 2868.4 | |
14 | 31.2 | |||
16 | 51.9 | s8.19 / p8.54 / 18.7 | 5376.4 | p12.59 / 2256.1 |
Timings taken on a 16 CPU Intel Xeon SMP computer running
at 2.7 GHz and with 32 GB RAM.
JVM 1.4.2 started with AggressiveHeap and UseParallelGC.
*) timing taken with other load on the CPUs.
+) timing taken with no other load on the CPUs.
Speedup: s = relative to sequential,
p = relative to parallel with one thread / CPU.
Scaling from 8 to 16 CPUs is bad, but this is also observed on
non-CA/GB examples (Java and C/FORTRAN).
N (vars = N+1) | TermOrder | Seconds | TermOrder | Seconds |
---|---|---|---|---|
7 | G | 32044.204 | L | |
6 | G | 112.641 | L | |
5 | G | 4.195 | L | |
4 | G | 0.431 | L | 11.650 |
3 | G | 0.153 | L | 0.310 |
2 | G | 0.031 | L | 0.032 |
New methods putParallel(), removeParallel() and helper methods. Sequence numbers are generated, and reduced polynomials are only put to the pair list if the corresponding pair number is in the correct (sequential) sequence. The ordered list/queue pairsequence (TreeMap/SortedMap) keeps track of the polynomials not yet put to the pairlist.
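The buffering idea can be sketched as follows (an illustrative sketch, not the JAS pairlist code): results tagged with sequence numbers are held in a SortedMap and released only once all predecessors have arrived, so a parallel run reproduces the sequential insertion order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Sequence-respecting buffer: out-of-order results wait in a TreeMap
// until the contiguous prefix starting at `next` is complete.
public class PairSequence<T> {
    private final SortedMap<Integer, T> pending = new TreeMap<>();
    private int next = 0; // next sequence number allowed out

    /** Buffer a result; return all results now releasable, in order. */
    synchronized List<T> put(int seq, T result) {
        pending.put(seq, result);
        List<T> out = new ArrayList<>();
        while (!pending.isEmpty() && pending.firstKey() == next) {
            out.add(pending.remove(pending.firstKey()));
            next++;
        }
        return out;
    }

    public static void main(String[] args) {
        PairSequence<String> ps = new PairSequence<>();
        System.out.println(ps.put(1, "b")); // []  (0 still missing)
        System.out.println(ps.put(2, "c")); // []  (0 still missing)
        System.out.println(ps.put(0, "a")); // [a, b, c]
    }
}
```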
New class ExecutableServer for remote execution of objects implementing the RemoteExecutable interface. New setup for the distributed computation of GBs: the GB master now sends the client code to some ExecutableServers, based on a machine file with host and port information about the distributed environment.
Improved the PolynomialTokenizer so that it can read almost unedited old MAS GB input files: ** exponents and parentheses around polynomials. Lines starting with # are treated as comments. Comments (* *) and parentheses within polynomials are still not supported.
Implemented a common driver for GB computations, RunGB. Sequential, thread-parallel and distributed computation can be selected by command line parameters. The input is taken from a file; the number of threads, respectively the number of distributed clients, can be specified. For distributed execution the host and port information is taken from a machines file.
Usage: RunGB [seq|par|dist|cli] &lt;file&gt; #procs [machinefile]
Added methods putCount() and remCount() in OrderedPairlist to count the number of polynomials put to and removed from the pair data structure.
# Threads | 1 seq | 1 par | 2 par | 2 par | 3 par | 4 par | 5 par | 1 dist | 2 dist | 3 dist | 4 dist | 4 dist | 4 dist | 5 dist |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
# put | 22 | 22 | 43 | 26 | 28 | 28 | 28 | 22 | 25 | 28 | 37 | 40 | 33 | 27 |
# remove | 25 | 25 | 61 | 30 | 32 | 32 | 41 | 26 | 33 | 42 | 47 | 61 | 54 | 69 |
Timings at 500 MHz on one CPU and one machine with log4j level INFO:
ca. 2.5 - 3.5 seconds for sequential GB,
ca. 2.5 - 6 seconds for parallel GB,
ca. 5.5 - 9 seconds plus 5 seconds sync time for distributed GB.
Network shuffling of polynomials seems to account for 3 seconds in
this example.
Problem uncovered: the distributed version of GB needs an average of 5 seconds to sync with all clients (on one machine). This is way too much for execution times in the range of 2 to 8 seconds.
Redesign of DistributedList, now using TreeMap to keep the list
entries in proper sequence. As key a natural number is used, which is
assigned by the server to successive add() requests.
The server now also holds a copy of the list by itself. So the
retransmission of list elements to late arriving clients is possible.
This doubles the space required to store polynomials, but removes
initial delays to sync all clients to receive all list elements.
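The redesigned list can be sketched in memory as follows (an illustrative sketch only; the real DistributedList communicates over TCP/IP and the class here is hypothetical): the server assigns increasing natural-number keys to add() requests and keeps its own TreeMap copy, so the whole list can be retransmitted in proper sequence to clients that join late.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Server-side sketch: elements keyed by a server-assigned sequence
// number; snapshot() is the retransmission for a late-arriving client.
public class DistributedListSketch<T> {
    private final TreeMap<Long, T> elements = new TreeMap<>();
    private long counter = 0; // server-assigned sequence key

    synchronized long add(T element) {
        elements.put(counter, element);
        return counter++;
    }

    /** Retransmission for a late client: all elements in key order. */
    synchronized List<T> snapshot() {
        return new ArrayList<>(elements.values());
    }

    public static void main(String[] args) {
        DistributedListSketch<String> dl = new DistributedListSketch<>();
        dl.add("p0");
        dl.add("p1");
        dl.add("p2");
        System.out.println(dl.snapshot()); // [p0, p1, p2]
    }
}
```

Keeping the server copy doubles the storage, as noted above, but a late client simply replays the snapshot instead of stalling all others at startup.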
By retransmission, the DistributedList synchronization delay during
DistributedGB could be removed.
However, the problem of an about 5 second delay in the startup
of DistributedGB still remains; it is not visible where and why this
delay occurs.
Further improvements would be the removal of list elements or the
clearing of the list.
Next steps could be a distributed HashMap or TreeMap.
An important improvement would be to keep serialized copies of the
list elements (polynomials) at the server, to avoid repeated
serialization during broadcasts.
Still to do: a launcher similar to mpirun,
a daemon to run on a distributed system to work with the launcher,
solvable polynomials and non-commutative GBs.
With one thread the time is 30.6 h. Besides the better CPU speed, this is a 5% improvement on JDK 1.4 compared to the older timings with JDK 1.3 and the new polynomial implementation.
Multiplication with ordered polynomials is about 8-10 times faster than multiplication with unordered polynomials. Also, multiplication with semi-ordered polynomials (LinkedHashMap) with order-preserving addition is about 7-8 times slower than multiplication with ordered polynomials.
All implementations are based on the Map interface and classes. The Map maps exponent vectors (from some monoid) to coefficients (from some domain). This is in sync with the mathematical definition of multivariate polynomials as mappings from some monoid to some domain. Term orders are represented by a TermOrder class which provides the desired Comparator classes for the SortedMap implementation.
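This representation can be sketched directly with the JDK collections (an illustrative sketch, not the JAS classes; the graded-lex comparator below stands in for a TermOrder):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.TreeMap;

// A multivariate polynomial as a SortedMap from exponent vectors to
// coefficients, sorted by a term order given as a Comparator
// (here: graded lexicographic, highest term first).
public class PolyAsMap {
    // graded lex: compare total degree first, then reverse lexicographic
    static final Comparator<int[]> GRLEX = (a, b) -> {
        int da = Arrays.stream(a).sum(), db = Arrays.stream(b).sum();
        if (da != db) return db - da;  // higher total degree first
        return -Arrays.compare(a, b);  // then larger lex exponent first
    };

    public static void main(String[] args) {
        TreeMap<int[], Long> p = new TreeMap<>(GRLEX);
        p.put(new int[]{1, 1}, 3L); // 3*x*y
        p.put(new int[]{0, 2}, 1L); // y^2
        p.put(new int[]{1, 0}, 5L); // 5*x
        // leading exponent vector under graded lex is that of x*y
        System.out.println(Arrays.toString(p.firstKey())); // [1, 1]
    }
}
```

The leading term is simply firstKey()/firstEntry() of the SortedMap, which is what makes the ordered representation fast for reduction steps.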
However the merging add/subtract implementation is a factor of
2 slower than the TreeMap implementation.
The complexity for a+b is
2*(length(a)+length(b))
for access and merging of pre-sorted polynomials, and
2*length(a)+length(b)+length(b)*log2(length(a+b))
for TreeMap clone, access and insertion.
The merging multiplication implementation is a factor of
10 slower than the TreeMap implementation.
Polynomial size was ~100 terms and the product contained ~8000 terms.
The complexity for a*b is
lab = length(a)*length(b) coefficient multiplications for both implementations,
plus 2*length(a*b)*length(b) for merging summands, respectively
plus length(a)*length(b)*log2(length(a*b)) for TreeMap insertion.
Since for sparse polynomials length(a*b) = lab, the TreeMap complexity
is asymptotically better in this case:
2*length(a)*length(b)*length(b) >= length(a)*length(b)*log2(length(a*b)).
For dense polynomials with length(a*b) ~ length(a)[+length(b)],
the LinkedHashMap complexity is asymptotically better:
2*length(a)*length(b) <= length(a)*length(b)*log2(length(a*b)).
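Plugging the quoted sizes (factors of ~100 terms, product of ~8000 terms) into the two multiplication cost terms above predicts the TreeMap variant to be cheaper by roughly an order of magnitude, consistent with the observed factor of about 10:

```java
// Evaluate the two dominating cost terms from the text for
// length(a) = length(b) = 100 and length(a*b) = 8000.
public class MultCost {
    static double merging(int la, int lb, int lab) {
        return 2.0 * lab * lb; // 2*length(a*b)*length(b), merging summands
    }

    static double treeMap(int la, int lb, int lab) {
        // length(a)*length(b)*log2(length(a*b)), TreeMap insertion
        return la * lb * (Math.log(lab) / Math.log(2));
    }

    public static void main(String[] args) {
        double m = merging(100, 100, 8000); // 1.6e6
        double t = treeMap(100, 100, 8000); // ~1.3e5
        System.out.println(m / t > 10);     // true: ratio ~12
    }
}
```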
Parallel computations with the Rose example are at 18 h with 4 threads on 2 Pentium 4 @ 2.4 GHz hyperthreading CPUs; with one thread the time is 40 h.
Does this influence the validity of criterion 3? Access to the pairlist is synchronized, and pairs are marked as reduced as soon as they are taken from the list. But the algorithm terminates only after all reductions of pairs have terminated, so criterion 3 holds.
New implementation of the parallel version of GB. Removal of pairs is now also done in parallel, but the ordering of pair insertion is no longer preserved.
Timings are more stable and slightly better than that of sequential GB.
Todo: unit tests, comments, ...
Improved parallel version of GB.
Todo: CVS, comments, polynomial implementation using LinkedList, parallel GB simplify
With the improved algorithm the running time of the parallel GB becomes more stable and not slower than the sequential GB. However there is still no significant speedup.
# Threads | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 16 |
---|---|---|---|---|---|---|---|---|---|
# Reductions | 25 | 25 | 27 | 25 | 25 | 25 | 25 | 25 | 25 |
# Threads | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 16 |
---|---|---|---|---|---|---|---|---|---|
# Reductions | 22 | 24 | 30, 28, 24, 29 | 28 | 29 | 42 | 32 | 32 | 37 |
parallel results:
Trinks 7: mas 0.598 sec, jas 0.918 sec, jas par 0.955 sec
Trinks 6: mas 26.935 sec, jas 3.211 sec, jas par 3.656 sec
mas: including startup and gc time,
jas: excluding startup of jvm and including gc time,
jas par on single processor
timing on P-3@450
Implemented edu.jas.arith.BigInteger, which implements Coefficient; tested with IntPolynomial, which extends Polynomial.
Todo: alternative implementations, cleanup of RatPolynomial, parallel GB, conversion RatPolynomial &lt;--&gt; IntPolynomial
second results (with new criterion 3 in jas):
Trinks 7: mas 0.598 sec, jas 1.159 sec
Trinks 6: mas 26.935 sec, jas 6.468 sec
mas: including startup and gc time,
jas: excluding startup of jvm and including gc time
third results (with new criterion 3 in jas and GBminimal):
Trinks 7: mas 0.598 sec, jas 0.918 sec
Trinks 6: mas 26.935 sec, jas 3.211 sec
mas: including startup and gc time,
jas: excluding startup of jvm and including gc time
timing on P-3@450
this makes for a factor of 8-9 better, all tests with output to files, startup of JVM is approx. 1.0-1.2 sec, most time is spent in BigInteger:
java -Xrunhprof:cpu=times,format=a
CPU TIME (ms) BEGIN (total = 136) Thu Jan 2 18:33:53 2003
rank self accum count trace method
1 15,44% 15,44% 596610 21 java.math.MutableBigInteger.rightShift
2 13,24% 28,68% 582132 15 java.math.MutableBigInteger.difference
3 12,50% 41,18% 612760 19 java.math.MutableBigInteger.getLowestSetBit
4 9,56% 50,74% 2 9 java.lang.Object.wait
5 9,56% 60,29% 5271 22 java.math.MutableBigInteger.binaryGCD
6 6,62% 66,91% 612760 23 java.math.BigInteger.trailingZeroCnt
7 5,88% 72,79% 592152 18 java.math.BigInteger.bitLen
8 5,88% 78,68% 6018 20 java.math.MutableBigInteger.binaryGCD
9 5,15% 83,82% 578887 25 java.math.MutableBigInteger.normalize
10 4,41% 88,24% 550992 24 java.math.MutableBigInteger.primitiveRightShift
11 4,41% 92,65% 1 10 java.lang.Object.wait
12 3,68% 96,32% 582132 12 java.math.MutableBigInteger.compare
13 0,74% 97,06% 35965 13 edu.jas.poly.ExpVector.EVILCP
14 0,74% 97,79% 11612 14 java.math.BigInteger.divide
15 0,74% 98,53% 5866 11 java.math.MutableBigInteger.divide
16 0,74% 99,26% 9032 16 java.math.MutableBigInteger.divide
17 0,74% 100,00% 9032 17 java.math.BigInteger.divide
CPU TIME (ms) END
first results (without criterion 3 in jas):
Trinks 7: mas 0.598 sec, jas 1.373 sec
Trinks 6: mas 26.935 sec, jas 30.935 sec
mas: including startup and gc time,
jas: excluding startup of jvm and including gc time.
timing on P-3@450
Implemented and tested ExpVector based on Java's int arrays.
Implemented and tested RatPolynomial based on Java's TreeMap.
Static methods: DIRPDF, DIRPWR via toString, DIRPON, DIRPMC,
DIRPPR, DIRPSM, DIRRAS.
Consider replacing keySet/get by entrySet/getValue where appropriate.
Can there be a generic Polynomial class?
This will be the end of list processing for Jas.
DIPRNGB needs DIRPDF, DIRPWR, DIRPON, DIRPMC, DIRPPR, DIRPSM.
These need RNDIF, RNDWR, RNINT, RNSIGN, ISRNONE, RNONE, RNZERO,
RNINV (and DIRPRP), RNPROD, RNSUM.
Last modified: Sat Sep 3 12:31:58 CEST 2011