> It is a reluctant algorithm based on the principle of multiply and surrender. It was published in 1986 by Andrei Broder and Jorge Stolfi in their paper Pessimal Algorithms and Simplexity Analysis.
> The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.
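The multiply-and-surrender recursion described above can be sketched in Python (my transcription of the idea, not the authors' code):

```python
def slowsort(a, i=0, j=None):
    """Multiply and surrender: sort each half, move the larger of the
    two halves' maxima to position j, then slowsort everything before it."""
    if j is None:
        j = len(a) - 1
    if i >= j:
        return a  # surrender: the subproblem can no longer be postponed
    m = (i + j) // 2
    slowsort(a, i, m)        # sort the first half
    slowsort(a, m + 1, j)    # sort the second half
    if a[m] > a[j]:          # the maximum is now at m or j; put it at j
        a[m], a[j] = a[j], a[m]
    return slowsort(a, i, j - 1)  # multiply: re-sort all but the last slot
```

Correct, but pessimal: the recurrence works out to worse-than-polynomial time, which is exactly the point.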
> The probability of the original input list being in the exact order it's in is 1/(n!). There is such a small likelihood of this that it's clearly absurd to say that this happened by chance, so it must have been consciously put in that order by an intelligent Sorter. Therefore it's safe to assume that it's already optimally sorted in some way that transcends our naïve mortal understanding of "ascending order". Any attempt to change that order to conform to our own preconceptions would actually make it less sorted.
Quantum bogosort is probably my personal favorite. Randomly permute the input using a quantum source of entropy. If the input is not sorted, destroy the universe. The only universe remaining is one where the input was already sorted. Runs in O(n) time.
You can just kill the observer. Quantum immortality sort:
def qisort(lst):
    qshuffle(lst)  # reshuffle using a quantum source of entropy
    if lst != sorted(lst):  # note: `not sorted(lst)` would be wrong; a non-empty list is always truthy
        kill_observer()
Only in the universe in which the list is sorted will the user survive. There might be no such universe, but then the user is no longer waiting on the result.
True although not very helpful if you actually want to do something with the sorted list. But I suppose if you cease to exist, so does whatever problem sorting the list was intended to solve.
Unfortunately, randomly permuting the input takes O(n log n) steps (you need at least that much time just to read in sufficiently many random bits). Perhaps a clever parallel architecture could reduce the runtime?
This is not true if you can generate a random number in constant time, which is probably the more practical of the two constraints to satisfy when implementing this algorithm.
Duplicates could also be handled by using (L+1)^n, rather than 2^n, where L is the length of the input: even if all elements are identical one will end up with a unique value p = L * (L+1)^n.
For an efficient implementation, one might want to round L+1 up to the nearest power of 2 to get crucial micro-optimisations based on instructions for bit scanning.
(I think this ends up being a very complicated phrasing of a counting sort.)
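For comparison, counting sort itself is short; a sketch for non-negative integers:

```python
def counting_sort(a):
    """Tally how many times each value occurs, then emit the values in
    order: the counts determine every element's final position, much as
    the (L+1)^n encoding determines it arithmetically."""
    if not a:
        return []
    counts = [0] * (max(a) + 1)
    for v in a:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)
    return out
```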
Bogosort is always a fun one, because I think it's one of the better introductions to NP-complete problems.
If you imagine that there's no "easy" algorithm to return a sorted list (but sorted lists do exist), then the only methodology possible is to try all combinations. And you do this either systematically, or randomly.
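The "random" strategy here is plain bogosort; a minimal sketch:

```python
import random

def bogosort(a):
    """Guess-and-check: shuffle until the permutation happens to be sorted.
    The expected number of shuffles is n!, so expected time is O(n * n!)."""
    def ordered(xs):
        return all(xs[k] <= xs[k + 1] for k in range(len(xs) - 1))
    while not ordered(a):
        random.shuffle(a)
    return a
```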
Random NP-complete algorithms, such as WalkSAT, are quite good in practice. Systematic NP-complete algorithms, such as DPLL-SAT, are more "obvious" in how they work, and are sufficient for smaller sizes.
I recently found a fun linear-time systolic sorting algorithm. I'd describe the serial implementation as unconventionally obvious. It's like insertion sort, without all the bother of swapping.
def index_sort(a):
    n = len(a)
    output = [None] * n
    for i in range(n):  # can run in parallel, damn the GIL
        index = 0
        for j in range(n):
            if a[j] < a[i]:
                index += 1
            elif a[j] == a[i] and j < i:
                index += 1
        output[index] = a[i]
    return output
That's the serial implementation, which is clearly O(n^2). The systolic algorithm runs in linear time on specialty hardware (similar to a tensor core, which can do comparisons in a square matrix, and can perform row-sums). Or, if we ignore communication overheads, it can run in O(n) time with O(n) parallel workers.
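The matrix view can be mimicked in plain Python (a sketch of the idea, not of the systolic hardware): every entry of the comparison matrix is independent of the others, and each row sum gives one output position:

```python
def rank_sort_matrix(a):
    """index_sort rewritten as a comparison matrix plus row sums.
    m[i][j] is 1 when a[j] must precede a[i] (ties broken by index,
    which keeps the sort stable); sum(m[i]) is a[i]'s output slot."""
    n = len(a)
    m = [[1 if (a[j] < a[i] or (a[j] == a[i] and j < i)) else 0
          for j in range(n)]
         for i in range(n)]
    out = [None] * n
    for i in range(n):
        out[sum(m[i])] = a[i]
    return out
```

With O(n^2) workers filling the matrix and O(n) row-sum units, the depth of the computation is O(n), which is where the linear-time claim comes from.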
import ctypes

def mutation_sort(a):
    # CPython-specific: overwrite each int object's payload in place
    # (the digit sits 24 bytes past the object header on this build)
    for i, v in enumerate(a):
        ctypes.c_int.from_address(id(v) + 24).value = i  # for CPython on x86-64; adjust the offset for your platform
    return a

def is_sorted(a):
    prev = None
    for v in a:
        if prev is not None and prev > v:
            return False
        prev = v
    return True

print(is_sorted(mutation_sort([3000, 1000, 7000, 8000, 2000])))  # prints True
Usage sort: sorting a sequence of numbered books while you pick books out of the sequence as you need them and insert them back afterwards (shifting the books in between). What is the best strategy for where to put the books back? For a discussion, see: https://www.iwriteiam.nl/Dpuzzle.html#usagesort
It's a Go channel based quicksort. It sorts a channel of ints using O(n) goroutines! Interestingly, it doesn't need to know how many ints are in the channel at the start.
Not a practical sort method but fun to see the Go concurrency primitives in use.
I know this article is meant as a joke, but the first variant of bogosort, which is actually randomsort, will be the fastest algorithm once we start using quantum computing.
Take an array, allocate n == array length qubits for it, then implement a parallel version of randomsort (no while loop; a while loop is serialization!) and you have the fastest sorting algorithm.
Slightly off-topic, the article was fine except for one nitpick...
I wonder what the author, "hilariously" making one sorting function print "<val> sent to gulag" and calling it "stalin-sort", would think of "hitler-sort" which prints "<val> burned in oven"?
Please don't normalize mass-murderers, even for memes.
It's an obvious joke, the point of which is that the Stalin Sort is destructive and doesn't actually get you what you want, and you'd never want to use it.
I think OP gets the joke and is just saying it is in poor taste. Is it particularly different than their example of Hitler-sort burning stuff in an oven on failure? Or Columbine-sort, where the 15 coolest numbers get shot and then the rest of the numbers can get sorted.
It is if you ignore how the underlying OS implements multitasking, and sleep() in particular. The kernel-side implementation usually involves some kind of priority queue of sleeping threads. So taken as a whole, sleep-sort is just a convoluted implementation of heap-sort that outsources the actual sorting to the OS.
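For reference, a thread-per-element sleep sort sketch in Python (assumes small non-negative numbers; the `scale` factor is a tuning knob, and too-small gaps make the result nondeterministic):

```python
import threading
import time

def sleep_sort(values, scale=0.05):
    """Each value sleeps in proportion to itself, then appends itself to
    the output; the OS's timer queue does the actual ordering."""
    result = []
    lock = threading.Lock()

    def worker(v):
        time.sleep(v * scale)  # larger values wake up later
        with lock:
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result
```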
The kernel could use a different implementation of multitasking, e.g. keep the threads in a circular doubly linked list instead of a priority queue. This wouldn't be optimal for a general-purpose OS, but works for our custom O(n) sorting machine.
Then the resulting complexity would be O(n^2). With some kind of heap-like priority queue, as used in a typical general-purpose OS, you get O(n log n) or something close to it.
I really like sleep sort because it fundamentally trades cycles and memory for time.
In theory you could do this in hardware. Hardware timers are pretty trivial, you could imagine a hardware chip that accepts a number, then writes the value to the next free slot in an array when the time has elapsed. As timers go off, the array fills up. Obviously you need to handle collisions and whatnot but nothing in this screams that you need to go O(n^2).
You need n separate programmable timers, though. If you only have a constant number of timers, then you’re back to implementing something approaching a priority queue to schedule the events (possibly in hardware if you want)
It's perversely linear with respect to the largest element in the array, if that's greater than the runtime of the underlying heapsort. Good luck with negative entries.
Response to dead comment:
> Couldn't this be solved by scaling everything by the largest entry. Of course that requires first a pass to find the largest entry but that's not as expensive?
One presumes that we don't have infinite precision timers, so you should only scale it down so the smallest gap between elements is a few clock cycles (unless you're satisfied with breaking the fiction that sleepsort is not heapsort). This seems like a mildly interesting question. My intuition is that finding the smallest abs(a[i]-a[j]) with i != j should be just as hard as sorting the list on a classical computer, but it seems like a quantum computer might find the smallest gap faster.
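The reduction does go the other way: once the list is sorted, the closest pair must be adjacent, so finding the smallest gap is at least no *harder* than sorting. A sketch:

```python
def smallest_gap(a):
    """min |a[i] - a[j]| over i != j: sort first, then the minimum gap
    must occur between neighbours, giving O(n log n) overall."""
    s = sorted(a)
    return min(s[k + 1] - s[k] for k in range(len(s) - 1))
```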
https://www.mipmip.org/tidbits/pasa.pdf