CG models of computation?
Jeff Erickson
jeffe at cs.uiuc.edu
Sun Dec 7 04:16:10 PST 2003
Joachim Gudmundsson wrote:

> Recently, I read an excellent paper where the authors add the power
> of the floor function to the standard model of computation. The
> running time of their algorithm is almost linear and beats the lower
> bound in the comparison model.
>
> Somewhat puzzled and skeptical, I asked my colleagues for their
> opinion on this topic, only to find out that this model was,
> according to them, unofficially accepted.
>
> My question is: what is "unofficially" allowed? One could claim
> that use of the floor function should be accepted, since it is used
> in practice and is very fast. How about bit manipulation? But then we
> should measure the complexity in the number of input bits, right?

This is actually a pretty big can of worms!

The most common model of geometric computation seems to be the
unit-cost real RAM, but it is fairly common to add the floor function
to permit bucketing or hashing, in many cases beating lower bounds in
the algebraic decision tree model. The two classical examples are
Gonzalez's algorithm for MAX GAP [gasrp75] and Rabin's randomized
algorithm for closest pairs [rpa76].

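Gonzalez's bucketing idea is short enough to sketch. Here is a Python
version (my own reconstruction, with made-up names, not code from
[gasrp75]): by the pigeonhole principle the maximum gap is at least
(hi - lo)/(n - 1), so no gap inside a bucket of that width can be the
answer, and one floor per element suffices to beat the Omega(n log n)
comparison-tree lower bound for this problem.

```python
def max_gap(xs):
    """Largest gap between consecutive elements of xs in sorted order,
    computed in O(n) time with bucketing; the floor in the bucket index
    is the single non-algebraic operation."""
    n = len(xs)
    if n < 2:
        return 0
    lo, hi = min(xs), max(xs)
    if lo == hi:
        return 0
    width = (hi - lo) / (n - 1)                 # pigeonhole: max gap >= width
    bmin = [None] * n
    bmax = [None] * n
    for x in xs:
        b = min(int((x - lo) / width), n - 1)   # floor(...) -- bucket index
        bmin[b] = x if bmin[b] is None else min(bmin[b], x)
        bmax[b] = x if bmax[b] is None else max(bmax[b], x)
    # The maximum gap runs from the max of one nonempty bucket to the
    # min of the next nonempty bucket.
    gap, prev = 0, lo
    for b in range(n):
        if bmin[b] is None:
            continue
        gap = max(gap, bmin[b] - prev)
        prev = bmax[b]
    return gap

print(max_gap([13, 2, 7, 21, 8]))   # prints 8 (the gap between 13 and 21)
```
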
However, despite its general acceptance, the unit-cost real RAM with
the floor function is NOT a reasonable model of computation, because
it allows any problem in PSPACE or #P to be solved in polynomial time!
In 1979, Sch\"onhage [spram79] described an algorithm to solve the
PSPACE-complete problem QBF (deciding whether a given arbitrarily
quantified boolean formula is true or false) using a polynomial
number of integer arithmetic operations: z = x+y, z = x-y, z = x*y,
and z = floor(x/y). The trick is to encode the entire formula into a
single integer and then use arithmetic to process different parts of
the encoded formula in parallel. His algorithm just removes the
quantifiers, by replacing each Ex F(x) with F(0) v F(1) and each
Ax F(x) with F(0) ^ F(1), and then simplifies the resulting
quantifier-free formula to either 0 or 1. Hartmanis and Simon
[hspmram74] did the same thing in 1974, only using bitwise Boolean
operations instead of integer division. A few years later, Bertoni et
al. [bmsscram85] generalized the same approach to the #P-complete
problem #SAT: How many satisfying assignments does this boolean
formula have? Peter van Emde Boas has a great discussion of "the
unreasonable power of integer multiplication" in his survey of models
of computation [emms90].

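The bitwise-parallel half of this trick is easy to demonstrate in any
language with unbounded integers. The sketch below is my own
illustration of the Hartmanis-Simon idea (names and details are mine,
not theirs): pack the entire 2^n-entry truth table of a formula into
one integer, one assignment per bit, build each variable's table with
O(n) shift-and-or doublings, and then a single bitwise operation per
connective evaluates the formula on all 2^n assignments at once.

```python
def var_table(i, n):
    """Truth-table integer for variable x_i over n variables: bit a is
    1 iff bit i of assignment a is 1. Built by doubling a repeating
    pattern -- O(n) big-integer operations, never touching the 2^n
    assignments one by one."""
    block = ((1 << (1 << i)) - 1) << (1 << i)   # 2^i zero bits, then 2^i one bits
    t, width = block, 1 << (i + 1)
    while width < (1 << n):
        t |= t << width                         # double the pattern's span
        width <<= 1
    return t

def count_sat(table, n):
    """#SAT = number of one bits in the truth-table integer.
    (Counted naively here; on a unit-cost RAM this step needs care too.)"""
    return bin(table & ((1 << (1 << n)) - 1)).count("1")

n = 2
x0, x1 = var_table(0, n), var_table(1, n)
print(count_sat(x0 ^ x1, n))   # x0 XOR x1 has 2 satisfying assignments
```

One bitwise AND, OR, or XOR on these (exponentially long) integers
simulates 2^n parallel evaluations, which is exactly why unit-cost
models with such operations are unreasonably powerful.
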
Partly as a result of the Hartmanis-Simon result, there are now two
essentially standard integer RAM models:

(1) Logarithmic-cost (or "bit-level") RAM: Each memory location can
    store an arbitrary integer. The cost of each arithmetic operation
    is proportional to the total number of BITS involved.

(2) Word-level RAM: Each memory location can store a single word
    consisting of Theta(log n) bits. Arithmetic and boolean
    operations on words take constant time, presumably because of
    hardware parallelism. Arithmetic on larger integers must be
    simulated.

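The gap between the two cost measures is easy to exhibit: on an
unrestricted unit-cost integer RAM, n squarings manufacture a number
with 2^n bits, so "n steps" can hide exponentially much bit-level
work. (A quick demonstration of my own, using Python's unbounded
integers:)

```python
x = 2
for _ in range(20):
    x = x * x                # one "unit-cost" multiplication each
print(x.bit_length())        # prints 1048577 = 2^20 + 1 bits after 20 steps
```

Under logarithmic cost, the last multiplication alone costs time
proportional to its million-bit operands; under word-level rules,
integers this large must be simulated word by word.
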
Complexity theorists prefer the bit-level RAM, but it's rarely used by
anyone else. Almost all integer-RAM algorithms are implicitly
developed and analyzed on the word-level RAM. Maybe it would be more
accurate to say that most algorithms are analyzed on the unrestricted
unit-cost integer RAM, but since the integers they create have only
O(log n) bits, they might as well be using the word-level RAM.
Essentially the same ideas as the Hartmanis-Simon-Sch\"onhage QBF
algorithm lead to "unreasonably efficient" algorithms and data
structures on the word-level RAM, starting with Fredman and Willard's
fusion trees [fwsitbf93].

Anyway... since a unit-cost integer RAM can be trivially simulated
using a unit-cost real RAM with the floor function, we can solve QBF
or #SAT in polynomial time in that model as well.

The most obvious way to avoid this mess is simply to never combine
exact real arithmetic with the floor function, but it's too late for
that; the bits are already out of the bag. A more reasonable approach
might be to allow the use of the floor function, but only if the
resulting integers have O(log n) bits. Most real-RAM-with-floor
algorithms (that I know about) obey this restriction. Allowing an
operation that computes the O(log n) most significant bits of a real
number might also be reasonable. Alternately, if you prefer the
logarithmic-cost model, you could let the cost of the floor operation
be the number of bits of the result.

Even without the floor function, the real RAM is not necessarily a
"reasonable" model of computation. For example, despite the existence
of linear-time real-RAM algorithms to compute minimum-link paths,
Kahan and Snoeyink [ksbcmlp96] constructed examples of polygons with
O(log n)-bit integer coordinates, such that some minimum-link paths
require Omega(n^2 log n) bits of precision. Real-RAM algorithms that
compare sums of distances are also questionable, since it is open
whether sums of square roots of integers can be compared in polynomial
time on an integer RAM. (But see [bcsrpt91]!)

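To see where the difficulty lies, here is a naive exact comparator for
sums of square roots (my own sketch, not from any paper cited here).
Scaling by 4^k and taking integer square roots traps each sum in an
interval of width (number of terms)/2^k; refining k must eventually
separate two unequal sums, but how large k must get in the worst case,
as a function of the input size, is exactly what is open, so the loop
needs an arbitrary precision cap.

```python
from math import isqrt

def cmp_sqrt_sums(A, B, max_bits=256):
    """Compare sum(sqrt(a) for a in A) with sum(sqrt(b) for b in B),
    for nonnegative integers. Returns -1 or 1 once the approximations
    provably separate, or 0 when the sums are equal (or the cap is hit)."""
    k = 1
    while k <= max_bits:
        scale = 1 << (2 * k)                    # 4^k, so each sqrt scales by 2^k
        loA = sum(isqrt(a * scale) for a in A)  # floor(2^k * sqrt(a)) per term
        loB = sum(isqrt(b * scale) for b in B)
        if loA + len(A) <= loB:                 # intervals strictly separate
            return -1
        if loB + len(B) <= loA:
            return 1
        k *= 2                                  # try again with more precision
    return 0

print(cmp_sqrt_sums([1, 4], [2, 2]))   # prints 1:  1 + 2 > 2*sqrt(2)
print(cmp_sqrt_sums([2, 8], [18]))     # prints 0:  sqrt(2)+sqrt(8) = sqrt(18)
```
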
-- Jeff

(Any references not listed here are in geom.bib.)
@inproceedings{hspmram74
, author = "Juris Hartmanis and Janos Simon"
, title = "On the power of multiplication in random-access machines"
, booktitle = "Proc. 15th Annu. IEEE Sympos. Switching Automata Theory"
, year = 1974
, pages = "13--23"
}
@inproceedings{spram79
, author = "Arnold Sch{\"o}nhage"
, title = "On the power of random access machines"
, booktitle = "Proc. 6th Internat. Colloq. Automata Lang. Program."
, series = "Lecture Notes Comput. Sci."
, volume = 71
, publisher = "Springer-Verlag"
, year = 1979
, pages = "520--529"
}
@article{bmsscram85
, author = "A. Bertoni and G. Mauri and N. Sabadini"
, title = "Simulations among classes of random access machines and equivalence among numbers succinctly represented"
, journal = "Ann. Discrete Math."
, volume = 25
, year = 1985
, pages = "65--90"
}
@incollection{emms90
, author = "Peter van {Emde Boas}"
, title = "Machine models and simulation"
, editor = "Jan van Leeuwen"
, booktitle = "Handbook of Theoretical Computer Science"
, volume = "A"
, publisher = "Elsevier"
, address = "Amsterdam"
, year = 1990
, pages = "1--66"
}

The compgeom mailing lists: see
http://netlib.bell-labs.com/netlib/compgeom/readme.html
or send mail to compgeom-request at research.bell-labs.com with the line:
send readme
Now archived at http://www.uiuc.edu/~sariel/CG/compgeom/maillist.html.