CG models of computation?

Jeff Erickson jeffe at cs.uiuc.edu
Sun Dec 7 04:16:10 PST 2003


Joachim Gudmundsson wrote:
| Recently, I read an excellent paper where the authors add the power
| of the floor function to the standard model of computation. The
| running time of their algorithm is almost linear and beats the lower
| bound in the comparison model.
|
| Somewhat puzzled and skeptical I asked my colleagues about their
| opinion on this topic, only to find out that this model was,
| according to my colleagues, unofficially accepted. 
|
| My question is, what is "unofficially" allowed? One could claim
| that the use of floor function should be accepted since it is used
| in practice and very fast. How about bit manipulation? But then we
| should measure the complexity in the number of input bits, right?

This is actually a pretty big can of worms!

The most common model of geometric computation seems to be the
unit-cost real RAM, but it is fairly common to add the floor function
to permit bucketing or hashing, in many cases beating lower bounds in
the algebraic decision tree model.  The two classical examples are
Gonzalez's algorithm for MAX GAP [g-asrp-75] and Rabin's randomized
algorithm for closest pairs [r-pa-76].
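
To see the flavor of these algorithms, here is a minimal Python
sketch of the floor-based bucketing behind a linear-time MAX GAP
algorithm in the spirit of Gonzalez's; the function name is mine, and
Python floats stand in for the model's exact reals.

    import math

    def max_gap(xs):
        # Largest difference between consecutive elements of
        # sorted(xs), computed in O(n) time with floor bucketing.
        n = len(xs)
        if n < 2:
            return 0.0
        lo, hi = min(xs), max(xs)
        if lo == hi:
            return 0.0
        # Pigeonhole: the maximum gap is at least (hi-lo)/(n-1), so
        # a gap strictly inside a bucket of that width never wins.
        width = (hi - lo) / (n - 1)
        nb = n - 1
        bmin = [None] * nb
        bmax = [None] * nb
        for x in xs:
            i = min(int(math.floor((x - lo) / width)), nb - 1)
            bmin[i] = x if bmin[i] is None else min(bmin[i], x)
            bmax[i] = x if bmax[i] is None else max(bmax[i], x)
        # Answer: the largest jump from the max of one nonempty
        # bucket to the min of the next nonempty bucket.
        best, prev = 0.0, bmax[0]   # bucket 0 holds lo, so nonempty
        for i in range(1, nb):
            if bmin[i] is None:
                continue
            best = max(best, bmin[i] - prev)
            prev = bmax[i]
        return best

There is no comparison sort anywhere: the single floor call is what
lets the algorithm sidestep the Omega(n log n) lower bound of the
comparison model.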

However, despite its general acceptance, the unit-cost real RAM with
the floor function is NOT a reasonable model of computation, because
it allows any problem in PSPACE or #P to be solved in polynomial time!

In 1979, Sch\"onhage [s-pram-79] described an algorithm to solve the
PSPACE-complete problem QBF---deciding if a given arbitrarily
quantified boolean formula is true or false---using a polynomial
number of integer arithmetic operations: z=x+y, z=x-y, z=xy, and
z=floor(x/y).  The trick is to encode the entire formula into a single
integer and then use arithmetic to process different parts of the
encoded formula in parallel.  His algorithm simply removes the
quantifiers, replacing each Ex F(x) with F(0) v F(1) and each Ax F(x)
with F(0) ^ F(1), and then simplifies the resulting quantifier-free
formula to either 0 or 1.  Hartmanis and Simon [hs-pmram-74] did the
same thing in 1974, only using bit-wise Boolean operations instead of
integer division.  A few years later, Bertoni et al. [bms-scram-85]
generalized the same approach to the #P-complete problem #SAT: How
many satisfying assignments does this boolean formula have?  Peter van
Emde Boas has a great discussion of "the unreasonable power of integer
multiplication" in his survey of models of computation [e-mms-90].

Partly in response to the Hartmanis-Simon result, there are now two
essentially standard integer RAM models:

(1) Logarithmic-cost (or "bit-level") RAM: Each memory location can
    store an arbitrary integer.  The cost of each arithmetic operation
    is proportional to the total number of BITS involved.

(2) Word-level RAM: Each memory location can store a single word
    consisting of Theta(log n) bits.  Arithmetic and boolean
    operations on words take constant time, presumably because of
    hardware parallelism.  Arithmetic on larger integers must be
    simulated.

Complexity theorists prefer the bit-level RAM, but it's rarely used by
anyone else.  Almost all integer-RAM algorithms are implicitly
developed and analyzed on the word-level RAM.  Maybe it would be more
accurate to say that most algorithms are analyzed on the unrestricted
unit-cost integer RAM, but since the integers they create have only
O(log n) bits, they might as well be using the word-level RAM.
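
A tiny Python experiment shows why the unrestricted unit-cost model
needs some such restriction: repeated squaring manufactures
exponentially many bits in linearly many "constant-time" operations.

    x = 2
    for step in range(1, 11):
        x = x * x                    # one "unit-cost" multiplication
        print(step, x.bit_length())  # 3, 5, 9, ..., 1025

After step k the operand has 2^k + 1 bits, so on the word-level RAM
these multiplications would have to be simulated word by word, and on
the logarithmic-cost RAM each one would be charged for every bit it
touches.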

Essentially the same ideas as the Hartmanis-Simon-Sch\"onhage QBF
algorithm lead to "unreasonably efficient" algorithms and data
structures on the word-level RAM, starting with Fredman and Willard's
fusion trees [fw-sitbf-93].


Anyway... since a unit-cost integer RAM can be trivially simulated
using a unit-cost real RAM with the floor function, we can solve QBF
or #SAT in polynomial time in that model as well.
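
Here is that simulation in miniature: a toy Python function (mine)
that reads individual bits of a number using only real-RAM arithmetic
plus floor.

    import math

    def bit(x, i):
        # Bit i of a non-negative x, using only division by a
        # power of two, floor, multiplication, and subtraction.
        return int(math.floor(x / 2**i) - 2 * math.floor(x / 2**(i + 1)))

    print([bit(13.0, i) for i in range(4)])  # 13 = 0b1101 -> [1, 0, 1, 1]

(Python floats stand in for the model's exact reals, so this toy only
behaves itself for inputs below 2^53.)  With bit access in hand, all
the integer-RAM machinery above comes along for free.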

The most obvious way to avoid this mess is simply to never combine
exact real arithmetic with the floor function, but it's too late for
that; the bits are already out of the bag.  A more reasonable approach
might be to allow the use of the floor function, but only if the
resulting integers have O(log n) bits.  Most realRAM+floor algorithms
(that I know about) obey this restriction.  Allowing an operation that
computes the O(log n) most significant bits of a real number might
also be reasonable.  Alternatively, if you prefer the logarithmic-cost
model, you could let the cost of the floor operation be the number of
bits of the result.
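
As code, the restricted floor might look like the following guard;
the function and the particular constant in the O(log n) bound are my
own illustration, not a standard definition.

    import math

    def restricted_floor(x, n, c=2):
        # floor(x), permitted only when the result fits in
        # roughly c * log2(n) bits; otherwise the model says no.
        r = int(math.floor(x))
        if abs(r).bit_length() > c * max(n, 2).bit_length():
            raise ValueError("floor result too large for this model")
        return r

An algorithm analyzed in this model can bucket and hash to its
heart's content, but it can never smuggle an exponential computation
into a single giant integer.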

Even without the floor function, the real RAM is not necessarily a
"reasonable" model of computation.  For example, despite the existence
of linear-time real RAM algorithms to compute minimum-link paths,
Kahan and Snoeyink [ks-bcmlp-96] constructed examples of polygons with
O(log n)-bit integer coordinates, such that some minimum-link paths
require Omega(n^2 log n) bits of precision.  Real-RAM algorithms that
compare sums of distances are also questionable, since it is open
whether sums of square roots of integers can be compared in polynomial
time on an integer RAM.  (But see [b-csrpt-91]!)
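
In practice one compares such sums with ever-increasing precision, as
in this hypothetical Python sketch; the open problem is precisely
that nobody can bound, polynomially in the bit size of the inputs,
how many digits the loop might need.

    from decimal import Decimal, getcontext

    def compare_sqrt_sums(xs, ys, digits=30):
        # Compare sum(sqrt(x)) with sum(sqrt(y)) by doubling the
        # working precision until the difference clearly exceeds
        # the accumulated rounding error.  If the two sums are
        # exactly equal, this loops forever.
        while True:
            getcontext().prec = digits
            sx = sum(Decimal(x).sqrt() for x in xs)
            sy = sum(Decimal(y).sqrt() for y in ys)
            slack = Decimal(10) ** (2 - digits) * (len(xs) + len(ys))
            if abs(sx - sy) > slack:
                return 1 if sx > sy else -1
            digits *= 2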

		   -- Jeff


(Any references not listed here are in geom.bib.)

@inproceedings{hs-pmram-74
, author =	"Juris Hartmanis and Janos Simon"
, title =	"On the power of multiplication in random-access machines"
, booktitle =	"Proc. 15th Annu. IEEE Sympos. Switching and Automata Theory"
, year =	1974
, pages =	"13--23"
}

@inproceedings{s-pram-79
, author =	"Arnold Sch{\"o}nhage"
, title =	"On the power of random access machines"
, booktitle =	"Proc. 6th Internat. Colloq. Automata Lang. Program."
, series =	"Lecture Notes Comput. Sci."
, volume =	71
, publisher =	"Springer-Verlag"
, year =	1979
, pages =	"520--529"
}

@article{bms-scram-85
, author =	"A. Bertoni and G. Mauri and N. Sabadini"
, title =	"Simulations among classes of random access machines and equivalence among numbers succinctly represented"
, journal =	"Ann. Discrete Math."
, volume =	25
, year =	1985
, pages =	"65--90"
}

@incollection{e-mms-90
, author =      "Peter van {Emde Boas}"
, title =       "Machine models and simulation"
, editor =      "Jan van Leeuwen"
, booktitle =   "Handbook of Theoretical Computer Science"
, volume =      "A"
, publisher =   "Elsevier"
, address =     "Amsterdam"
, year =        1990
, pages =       "1--66"
}

