Hacker News
Unconventional Sorting Algorithms (codingkaiser.blog)
89 points by devev on Oct 21, 2021 | hide | past | favorite | 54 comments



My favorite is still slowsort. https://en.wikipedia.org/wiki/Slowsort

> It is a reluctant algorithm based on the principle of multiply and surrender. It was published in 1986 by Andrei Broder and Jorge Stolfi in their paper Pessimal Algorithms and Simplexity Analysis.

https://www.mipmip.org/tidbits/pasa.pdf

> The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.



SleepSort is one of the subjects of the posted article.


I like Intelligent Design sort:

> The probability of the original input list being in the exact order it's in is 1/(n!). There is such a small likelihood of this that it's clearly absurd to say that this happened by chance, so it must have been consciously put in that order by an intelligent Sorter. Therefore it's safe to assume that it's already optimally sorted in some way that transcends our naïve mortal understanding of "ascending order". Any attempt to change that order to conform to our own preconceptions would actually make it less sorted.

https://www.dangermouse.net/esoteric/intelligentdesignsort.h...
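The implied implementation is delightfully minimal; a Python sketch of the idea (my own rendering, not code from the linked page):

```python
def intelligent_design_sort(a):
    # The input is already optimally sorted in a way that transcends
    # our naive mortal understanding of "ascending order"; any change
    # would only make it less sorted.
    return a
```

Runs in O(1) time, and needs no comparisons at all.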


Ohhhhhh... fascinating! Is there a good metric for how well sorted a list is? Maybe for every pair of elements, how many are in the correct order?

Given a random shuffle of a list, what are the chances that another shuffle would be more sorted? There's a life lesson in here, I know it...
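The pair-counting metric suggested above is essentially (one minus) the normalised Kendall tau distance: count the pairs in non-descending order and divide by the n(n-1)/2 total pairs. A small sketch, assuming that definition:

```python
from itertools import combinations

def sortedness(a):
    """Fraction of element pairs (i < j) with a[i] <= a[j]."""
    pairs = list(combinations(range(len(a)), 2))
    if not pairs:
        return 1.0  # empty and one-element lists are perfectly sorted
    in_order = sum(a[i] <= a[j] for i, j in pairs)
    return in_order / len(pairs)
```

By symmetry, a uniformly random shuffle of distinct elements scores 0.5 on average, which also answers the second question: another shuffle is more sorted roughly half the time.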



Quantum bogosort is probably my personal favorite. Randomly permute the input using a quantum source of entropy. If the input is not sorted, destroy the universe. The only universe remaining is one where the input was already sorted. Runs in O(n) time.


Even without destroying the universe you can sort your list with a quantum source of entropy, just use:

    def cosmicraysort(list):
        while list != sorted(list):
            pass  # wait for stray cosmic rays to flip the bits into order


You can just kill the observer. Quantum immortality sort:

    def qisort(list):
        qshuffle(list)
        if list != sorted(list):
            kill_observer()

Only in a universe in which the list is sorted will the user survive. There might be no such universe, but then the user is no longer waiting on the result.


(the problem of destroying the universe left as an exercise to the reader)


For the operation of this algorithm, is there any meaningful difference between destroying the universe and destroying the observer(s)? https://en.wikipedia.org/wiki/Quantum_suicide_and_immortalit...

EDIT: If you allow for the objects to spontaneously sort themselves, the runtime is O(1).


Surely O(n) for checking whether it is sorted?


You could just destroy all universes to ensure it's sorted in the remaining reality.


I'm confused. What reality is remaining if all universes are destroyed?


None, but in it the list is sorted.

Or rather a reality wherein it wasn't sorted doesn't exist anymore.


True although not very helpful if you actually want to do something with the sorted list. But I suppose if you cease to exist, so does whatever problem sorting the list was intended to solve.


(the problem of destroying which universe is also left as an exercise to the reader)


Unfortunately, randomly permuting the input takes O(n log n) steps (you need at least this much time just to read in sufficiently many random bits). Perhaps a clever parallel architecture could reduce the runtime?


This is not true if you can generate a random number in constant time, which is probably the more practical of the two constraints to satisfy when implementing this algorithm.


My contribution: Power Sort.

Start with p=0 and add 2^n for each element n of the input. Then successively subtract the largest remaining power of 2 to read your integers out in order.

Con: p will be very big.

Pro: you don't need ifs!
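A Python sketch of the scheme, assuming distinct non-negative integers (arbitrary-precision ints make the very big p workable):

```python
def power_sort(nums):
    # Encode: p gets bit n set for each input value n.
    p = 0
    for n in nums:
        p += 2 ** n
    # Decode: repeatedly subtract the largest remaining power of 2.
    out = []
    while p:
        b = p.bit_length() - 1  # exponent of the largest power of 2 in p
        out.append(b)
        p -= 2 ** b
    return out[::-1]  # collected in descending order, so reverse
```

True to the "pro", there are no ifs anywhere in the loop bodies.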


Fun fact, when you consider the bits required to contain p, this is equivalent to bucket sort with a bucket size of 1.


Ooh, that's a fun one.

Another con: you'd better hope your input contains no duplicates. :)


Duplicates could also be handled by using (L+1)^n, rather than 2^n, where L is the length of the input: even if all elements are identical one will end up with a unique value p = L * (L+1)^n.

For an efficient implementation, one might want to round L+1 up to the nearest power of 2 to get crucial micro-optimisations based on instructions for bit scanning.

(I think this ends up being a very complicated phrasing of a counting sort.)
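A sketch of that variant: in base L+1, each digit of p is the multiplicity of a value, so decoding p digit by digit is exactly a counting-sort readout.

```python
def power_sort_with_duplicates(nums):
    base = len(nums) + 1  # a count can never overflow a base-(L+1) digit
    p = 0
    for n in nums:
        p += base ** n
    out, value = [], 0
    while p:
        p, count = divmod(p, base)  # digit `value` of p = multiplicity of `value`
        out.extend([value] * count)
        value += 1
    return out
```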


Easily solvable with a hash.

I have thought this through (I'm ashamed to admit).


Bogosort is always a fun one, because I think it's one of the better introductions to NP-complete problems.

If you imagine that there's no "easy" algorithm to return a sorted list (but sorted lists do exist), then the only methodology possible is to try all combinations. And you do this either systematically, or randomly.

Random NP-complete algorithms, such as WalkSAT, are quite good in practice. Systematic NP-complete algorithms, such as DPLL-SAT, are more "obvious" in how they work, and are sufficient for smaller sizes.
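The two flavours side by side, sketched in Python: random search by shuffling, and systematic search by trying permutations in a fixed order.

```python
import random
from itertools import permutations

def is_sorted(a):
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def bogosort(a):
    """Random search: shuffle until sorted. Expected O(n * n!) time."""
    while not is_sorted(a):
        random.shuffle(a)
    return a

def systematic_bogosort(a):
    """Systematic search: enumerate permutations until one is sorted."""
    return next(list(p) for p in permutations(a) if is_sorted(p))
```

Both are hopeless for large n, but like DPLL for SAT, the systematic version is at least predictable on small inputs.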


  def everett_sort(list):
    shuffle(list)
    if is_sorted(list):
      return list
    else:
      suicide()


I recently found a fun linear-time systolic sorting algorithm. I'd describe the serial implementation as unconventionally obvious. It's like insertion sort, without all the bother of swapping.

  def index_sort(a):
    n = len(a)
    output = [None] * n
    for i in range(n): #can run in parallel, damn the GIL
      index = 0
      for j in range(n):
        if a[j] < a[i]:
          index += 1
        elif a[j] == a[i] and j < i:
          index += 1
      output[index] = a[i]
    return output


> linear-time

But... there are two nested `range(n)` for loops. Typo?


That's the serial implementation, which is clearly O(n^2). The systolic algorithm runs in linear time on specialty hardware (similar to a tensor core, which can do comparisons in a square matrix, and can perform row-sums). Or, if we ignore communication overheads, it can run in O(n) time with O(n) parallel workers.
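Ignoring communication overhead (and the GIL), the O(n)-workers version can be sketched with a thread pool, one rank computation per element:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_index_sort(a):
    n = len(a)

    def rank(i):
        # Output position of a[i]: elements smaller than it, plus
        # earlier equal elements (which keeps the sort stable).
        return sum(a[j] < a[i] or (a[j] == a[i] and j < i) for j in range(n))

    output = [None] * n
    with ThreadPoolExecutor() as pool:  # one logical worker per element
        for i, idx in enumerate(pool.map(rank, range(n))):
            output[idx] = a[i]
    return output
```

With n true parallel workers each rank takes O(n) comparisons, giving the claimed O(n) wall-clock time.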


    import ctypes
    
    def mutation_sort(a):
        for i, v in enumerate(a):
            ctypes.c_int.from_address(id(v) + 24).value = i # for x86-64, adjust the offset for your platform
        return a
    
    def is_sorted(a):
        prev = None
        for v in a:
            if prev is not None and prev > v: return False
            prev = v
        return True
    
    print(is_sorted(mutation_sort([3000, 1000, 7000, 8000, 2000]))) # prints True


Usage sort: sorting a sequence of numbered books while you pick books out of the sequence when you need them and insert them back afterwards (shifting the books in between). What is the best strategy for where to put the books back? For a discussion see: https://www.iwriteiam.nl/Dpuzzle.html#usagesort


Here is my offering

https://gist.github.com/ncw/5419af0e255d2fb62b98

It's a Go channel-based quicksort. It sorts a channel of ints using O(n) goroutines! Interestingly, it doesn't need to know how many ints are in the channel at the start.

Not a practical sort method but fun to see the Go concurrency primitives in use.


I like BTP-Sort. It’s O(n).

Really easy: you see it, then you say it, then it’s sorted.

https://www.btp.police.uk/police-forces/british-transport-po... (this is a famous national campaign on the British railways over the past few years)


I know this article is meant as a joke, but the first variant of bogosort, which is actually randomsort, will be the fastest algorithm once we start using quantum computing.

Allocate n == array-length qubits for the array, then implement a parallel version of randomsort (no while loop; while is serialization!), and you have the fastest sorting algorithm.


What about this one? This one was mentioned a couple of weeks back and is really funny.

https://news.ycombinator.com/item?id=28758106



Slightly off-topic, the article was fine except for one nitpick...

I wonder what the author, "hilariously" making one sorting function print "<val> sent to gulag" and calling it "stalin-sort", would think of "hitler-sort" which prints "<val> burned in oven"?

Please don't normalize mass-murderers, even for memes.


If you downvote, you must write what is wrong with that statement.


It's an obvious joke, the point of which is that the Stalin Sort is destructive and doesn't actually get you what you want, and you'd never want to use it.


I think OP gets the joke and is just saying it is in poor taste. Is it particularly different than their example of Hitler-sort burning stuff in an oven on failure? Or Columbine-sort, where the 15 coolest numbers get shot and then the rest of the numbers can get sorted.


But if the point is to mock the concept, how would it be considered normalizing anything?


I disagree that the point was to mock stalinism. It used Stalinism to joke about this type of sorting, but that doesn't mean it was mocking Stalinism.


Isn't 'sleep sort' linear with respect to array size?

O(n) sort achieved?
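For reference, a toy Python sketch of sleep sort with one thread per element. It is timing-dependent, so it only "works" when the sleep scale dwarfs scheduler jitter:

```python
import threading
import time

def sleep_sort(nums, scale=0.1):
    out, lock = [], threading.Lock()

    def worker(n):
        time.sleep(n * scale)  # larger values wake up later
        with lock:
            out.append(n)

    threads = [threading.Thread(target=worker, args=(n,)) for n in nums]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out
```

Only non-negative values work, since you can't sleep for a negative duration.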


It is if you ignore how the underlying OS implements multitasking, and sleep() in particular. The kernel-side implementation usually involves some kind of priority queue of sleeping threads. So taken as a whole, sleep sort is just a convoluted implementation of heap sort that outsources the actual sorting to the OS.


The kernel could use a different implementation of multitasking, e.g. keep the threads in a circular doubly linked list instead of a priority queue. This wouldn't be optimal for a general-purpose OS, but works for our custom O(n) sorting machine.


Then the resulting complexity would be O(n^2). With some kind of heap-like priority queue as used in typical general purpose OS you get O(n*log(n)) or something close to that.


I really like sleep sort because it trades (fundamentally) cycles and memory for time.

In theory you could do this in hardware. Hardware timers are pretty trivial, you could imagine a hardware chip that accepts a number, then writes the value to the next free slot in an array when the time has elapsed. As timers go off, the array fills up. Obviously you need to handle collisions and whatnot but nothing in this screams that you need to go O(n^2).


You need n separate programmable timers, though. If you only have a constant number of timers, then you’re back to implementing something approaching a priority queue to schedule the events (possibly in hardware if you want)


It's perversely linear with respect to the largest element in the array, if that's greater than the runtime of the underlying heapsort. Good luck with negative entries.

Response to dead comment:

> Couldn't this be solved by scaling everything by the largest entry? Of course that requires first a pass to find the largest entry, but that's not as expensive?

One presumes that we don't have infinite precision timers, so you should only scale it down so the smallest gap between elements is a few clock cycles (unless you're satisfied with breaking the fiction that sleepsort is not heapsort). This seems like a mildly interesting question. My intuition is that finding the smallest abs(a[i]-a[j]) with i != j should be just as hard as sorting the list on a classical computer, but it seems like a quantum computer might find the smallest gap faster.


Australia sorting algorithm:

Stay at your index!!!


  for i = 1 to n do
    for j = 1 to n do
      if A[i] < A[j] then
        swap A[i] and A[j]


Fastest swap implementation, in assembler:

  PUSH A[j]; PUSH A[i]; POP A[j]; POP A[i]





