Sandstorm Oasis Is Shutting Down (sandstorm.io)
225 points by kawera on Sept 15, 2019 | 97 comments



Sandstorm is one of the coolest pieces of tech I've come across in the last few years. Not just on a philosophical self-hosting level, but also a technical level (capnproto is awesome). I'm working on a product right now that is heavily inspired by Sandstorm.

In my opinion the one thing that makes it hard for sandstorm to grow is the requirement that apps have to specifically be modified to work on the platform, and it never reached a critical mass of developers building Sandstorm support into their apps. So it's essentially a chicken-and-egg problem. I'm not sure what the solution to that is.

I would love to read a pseudo-postmortem (I refuse to believe this heralds the death of sandstorm) by @kentonv about what he thinks the main challenges of getting to a truly decentralized world are, how sandstorm plays in, and where we go from here.


> In my opinion the one thing that makes it hard for sandstorm to grow is the requirement that apps have to specifically be modified to work on the platform

This is true, though of course without that requirement, Sandstorm wouldn't be able to do what it does.

More broadly, I think the challenge we have today is that we've settled into a local maximum way of doing things that we call "Cloud Architecture" and "Software as a Service". The industry has spent many billions of dollars exploring and optimizing this approach. Even open source developers are designing their little apps using hyper-scalable techniques because they think that's what they're "supposed" to do -- never mind that such architecture is actively hostile towards small-scale self-hosting.

I think the Sandstorm approach -- which is essentially "distribute apps to people's personal servers the same way you distribute apps to phones today" -- would be vastly preferable (both to users and developers), if it had the same level of development and investment.

We can't just invest $100B upfront to build the new world, so instead we need to find an incremental strategy to get there. That's the hard part. Sandstorm tried to harness the investment already being made by "indie" / open source developers. We got a long way with not very much money! But the indie motif didn't exactly give us enterprise credibility, or any other way to sustain the company.

My new hope is that "mainstream" cloud infrastructure will push towards being incrementally more and more decentralized on a technical level, because of the technical advantages that brings... If your software is designed to run as a million little servers rather than one huge one, then a company like Cloudflare (my current employer) can go deploy it to literally hundreds of locations around the world for huge savings in latency and long-haul bandwidth, better reliability, etc. Then once apps are designed that way, maybe, just maybe, it'll be easier to flip control of code execution and data away from the vendor towards the consumer? But obviously there's a long way to go to get there.


There are a bunch of us out there that still believe in the vision. You were just too early. This needs some sort of WRT54G kinda moment, synology or ???, you needed to have a last mile partner or a host (vpn in to services). I thought containers were going to take over cloud in a pervasive way and it hasn't happened yet.

I can't say thank you enough for what you have done. None of this was a failure.


We'll get there. I don't believe for a minute that kentonv doesn't still plan on being a part of that future.

> This needs some sort of WRT54G kinda moment, synology or ???, you needed to have a last mile partner or a host (vpn in to services)

While I love the idea of everyone running their own hardware, I've been thinking recently that physical separateness might be the least important aspect of decentralization. I think alignment of incentives is more important. What I mean is that having your apps/data on google's servers isn't necessarily a problem, but being locked into their platform and unable to easily change is. When it's simple and convenient for customers to change providers, companies are incentivized to maintain high quality products and features such as privacy guarantees.


I agree, but I add one thing that I've seen a lot of decentralization efforts ignore: If I decide to move servers, I need to be able to take with me not just my data, but my software (apps). Otherwise, self-hosting tends to mean accepting second-rate software, and choosing the best software means also choosing the servers provided by the same company. These should be separate decisions.


I'm creating something that takes this into account: you should have an embeddable/movable entity where you keep your data, APIs, and apps. And I tell you, it's a lot of work.

I was following Sandstorm closely, and although I'm making different choices than Sandstorm did, your project was one that I instantly recognized as tackling the same domain.

I mean, I think your thinking and problem solving are in the right place; we just need a couple more iterations, research, and effort to get there.

I will try to give it a shot too. I've been working on this for a couple of years on top of the Chromium codebase, with a design that was not quite right in the beginning and needed a couple more iterations and thought, but in its current incarnation, I think I might have something to show.

I hope I can count on your comments and criticism when I launch it, which I believe is only a couple of months from now, as I have great respect for your work and your vision.

You helped move the needle forward in a domain that desperately needs a solution, or else all of our data and freedoms are at stake (not to mention it can even be a better technology than the current corporate-centric cloud vision, which is designed to maximize corporate profit and data centralization, where we don't actually own anything anymore).

So thanks for all of your work, it was really inspiring.


Looking forward to seeing your project!


Me too!


> I need to be able to take with me not just my data, but my software (apps)

Can you give an example of what you're talking about here?


I mean that if I think Google Sheets is the best online spreadsheet software, but I don't want to store my data on Google's servers, I should be able to run Google Sheets on some other servers. I shouldn't be forced to use some other, worse software if I choose different physical hosting. These should be independent decisions.

SaaS providers basically have vertical monopolies here and I think that hurts consumers.


What would you think of a model where instead of having the data and apps coupled at all, they're handled as separate problems? So for example you could pay one provider (or self host) to store your data, then use standard protocols to access the data (like HTTP for simple things, and maybe something more advanced and akin to Firebase for more complicated apps). So it doesn't matter where the apps are hosted. Each app can even be served from a different domain. You just point them at your "hard-drive on the web".
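A minimal sketch of that separation, with an in-memory class standing in for a hypothetical HTTP-backed provider. The `put`/`get`/`watch` names and the notification scheme are illustrative, not any real protocol:

```python
# Sketch of the "hard-drive on the web" idea: user-owned storage that any
# app can be pointed at. The in-memory implementation stands in for an
# HTTP-backed one (e.g. PUT/GET on paths, plus Firebase-style change events).

from typing import Callable, Dict, List


class RemoteStore:
    """Hypothetical user-owned storage provider."""

    def __init__(self) -> None:
        self._files: Dict[str, bytes] = {}
        self._watchers: List[Callable[[str], None]] = []

    def put(self, path: str, data: bytes) -> None:
        # Over the wire this would be an HTTP PUT against the user's provider.
        self._files[path] = data
        for notify in self._watchers:
            notify(path)

    def get(self, path: str) -> bytes:
        # Over the wire: an HTTP GET.
        return self._files[path]

    def watch(self, callback: Callable[[str], None]) -> None:
        # Firebase-style update events, so apps can react to changes.
        self._watchers.append(callback)


# Two independent "apps" pointed at the same user-owned store:
store = RemoteStore()
changed = []
store.watch(changed.append)               # app B subscribes to updates
store.put("notes/todo.txt", b"buy milk")  # app A writes
print(store.get("notes/todo.txt"))        # app B reads the same data
```

The point of the sketch is that neither "app" knows or cares where the store is hosted; each is just handed an endpoint.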


Sorry, but that's exactly the model that I think doesn't work at all.

- For reasonable performance and reliability, code needs to run close to data. You can't be accessing all data remotely over the internet.

- Requirements to use "standard" data formats prevent apps from innovating new features not covered by the standards.

- Different applications call for different kinds of databases, and there are still lots of new ideas to explore in database technology. It would suck if all apps were forced to use the lowest-common-denominator database protocol supported by user storage.


I disagree, but maybe we're talking about different kinds of computation.

- There are tons of classes of data that work rather well with streaming modes of computation, especially video, music, and images, which make up most of the average user's data. It can even work for bigger data. I recently had my whole genome sequenced. They're sending me the data on a 500GB hard drive, which I'll upload to the cloud and use tools like iobio[0] (which I work on) to analyze it remotely. Even in cases where you can't use sampling, often the actual computation is as much of a bottleneck as the network throughput is.

- I'm thinking more along the lines of a true hard drive analog, ie the apps can store whatever arbitrary data they want, including binary blobs and special config formats. The service provides a simple API for storing/retrieving files, notifying on updates, and handling auth/permissions.

- Again there are a huge class of problems that work just fine with flat filesystems. I think databases are heavily overused, especially in cases where only 1-100 people need to use the instance. You could run into issues with large collections of music or photos, but I don't know anyone who keeps more than a few hundred photos in any one directory.

[0] http://iobio.io/


Honestly, video, music, and images are the least-interesting use case, because they are large blobs of static data. Yeah you can put those in a flat filesystem just fine. We even already have open standards for this and a healthy ecosystem of file hosting providers and file-consuming applications that integrate with them.

However, the flat filesystem model would be awful for most productivity apps. Think GMail, Google Docs, Calendar, Slack, Trello, GitHub, Jira, etc. These are the use cases I care about.

Video, music, and images might be the bulk of most users' data by raw volume, but certainly not by frequency of access or importance.


> We even already have open standards for this and a healthy ecosystem of file hosting providers and file-consuming applications that integrate with them.

I'm not aware of any providers that offer the ease of google drive and the access of S3 (ie can I host a website on it). As far as I know both GDrive and Dropbox removed the ability to host files on the web in the last couple of years. Do you know of any comparable products that offer both of those? I'd be very interested in looking at them. I think S3 is the closest, because you can access the filesystem through their browser APIs, and publicly over HTTP. And it got popular enough for people to make open server implementations. The problem with S3 at the moment is that it wasn't designed for this use case, so it's missing features like Firebase-style update events.

> However, the flat filesystem model would be awful for most productivity apps. Think GMail, Google Docs, Calendar, Slack, Trello, GitHub, Jira, etc. These are the use cases I care about.

GMail, for sure. GDocs, I can't think of why you couldn't implement it for a reasonable number of users with nothing but text files using atomic updates, or maybe come up with a standard protocol for invoking text-based CRDTs. Or just send the change updates as git diffs, and if someone tries to change a line that would result in a conflict, reject that change and send back the current state of the document. This would be sufficient for any app for which the data can be expressed as text files. Worst case you could even allow something like CF Workers for cases where you absolutely need centralized logic, but I think there are a large number of tasks you can accomplish without that. Even something like Slack I think you could implement with text files on a disk. Pre-SSDs I'd have to agree with you, but nowadays I think a generic filesystem is good enough for most things that don't involve scale.
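The reject-on-conflict scheme described above can be sketched in a few lines: an edit carries the revision it was based on, and if the document has moved on, the server rejects the edit and returns the current state so the client can rebase. All names here are illustrative, not from any real protocol:

```python
# Optimistic concurrency over a plain text document: accept an edit only
# if it was based on the latest revision; otherwise reject it and send
# back the current state (the "git diff conflict" behavior described above).


class TextDoc:
    def __init__(self, text: str = "") -> None:
        self.text = text
        self.revision = 0

    def apply(self, base_revision: int, new_text: str):
        """Accept the edit only if it was based on the latest revision."""
        if base_revision != self.revision:
            # Conflict: reject and return the current state instead.
            return ("rejected", self.revision, self.text)
        self.revision += 1
        self.text = new_text
        return ("accepted", self.revision, self.text)


doc = TextDoc("hello")
# Two clients both start from revision 0; only the first edit lands.
print(doc.apply(0, "hello world"))  # accepted, now at revision 1
print(doc.apply(0, "hello there"))  # rejected; client must rebase on rev 1
```

This is obviously far cruder than a CRDT, but it illustrates how far atomic updates on text files can get you for a small number of users.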

> Video, music, and images might be the bulk of most users data by raw volume but certainly not by frequency of access or importance.

True


I see. Thanks for the explanation.

Can you talk about how this informs your work on Cloudflare Workers? Should I be able to take my workers and run them elsewhere, and on software that's not second-rate, as you put it? Should all the proprietary code behind Cloudflare Workers be something I can pick up and take to another host?


I think it would be pretty cool if apps written on Workers could run on whatever servers the end user of that app chooses, rather than what the app developer chooses.

I'm not entirely sure yet how that would work but it's something I like to think about.


Not the parent, but I understand this as the paradigm that containers and VMs use, for instance.

Your applications and executables are also part of the data and move together with them.


Yes, you can already install an application in a VM or container (or just instantiate one of the many ready-to-use images out there; shameless plug: https://bitnami.com).

But Sandstorm is something else. It composes well. You can have an application that is a good document editor and another one that offers a spreadsheet that works for you, and have a unified layer that handles identity, authorization, and document management (collections, sharing).

Short slogans such as "liberate your data" or "self-host your apps" focus on the end goal without highlighting what is really stopping us from doing it: the integrated, cohesive experience many of us expect/need, especially in enterprise environments.

I recently went through an acquisition where we had to switch from G Suite to Office 365. Oh my. What a mess.

In the ideal world we would have had our stuff in sandstorm grains, and after the merger our new colleagues would have access to them even if we wouldn't necessarily have picked the same spreadsheet app for example.

Now, for those of us for whom Word doesn't work well and who were happy with Google Docs, well, we don't have a choice: we can't possibly give a G Suite account to every employee in the larger company, and thus we have to migrate so as not to preclude potential collaboration.


I just found out about this project from this post, and I've spent my whole night reading up on it and the ecosystem and its past. I LOVE the vision and was so excited to read about it, but then sad to realize I'm joining the party too late. It's been a roller coaster of a night.

I think the most important piece of tech that Sandstorm was working on was the capability-based security and the powerbox concepts (which I recognized as being similar to Android's capability APIs, but for the web). I don't see the decentralized data or local servers being an easy sell any time soon, but I can see the capabilities security working really well within the browser, and more importantly with existing SaaS ecosystems.

If we could get a new browser API letting websites register available APIs with JSON Schema-based type defs through a manifest file, then users could allow those APIs to be included in their local browser registry. Other pages could request (or offer) data from those APIs, and with the user's permission, the browser could do the necessary OAuth handshake and use the API, passing data to/from another service in a secure way behind the scenes.

Also, since it would be a browser API, a webpage looking to call an API would be able to request provider info and show the appropriate in-app UI for selecting a service, which sounds like one of the problems Sandstorm had.

No one would have to worry about whether they implement a Dropbox API or a OneDrive API; the page would just have to show the 'file service' selector and call the user-selected API with a file. So I'd expect every SaaS out there would jump at the chance to provide an easy-access way for users to use and pay for their service.

Furthermore, the browser could provide external connectors as well, so I could have some native external app that registers a provider with the browser, allowing me to send my in-browser data to an external native app (or vice versa). For example, a right-click menu on a .drawio file to send it to DrawIO, or to the currently opened Google Slide. Or send a contact from your favorite web contact manager to your native Skype app. Seriously, I can think of so many uses for this! And it fits in perfectly with the existing file and clipboard APIs as well.
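A rough, purely illustrative model of the proposed registry: sites register providers for a capability type (e.g. "file-service"), and a page asking for that capability gets back the user's approved providers to pick from. A real version would live in the browser and handle the OAuth handshakes; the class, capability names, and endpoints below are all made up:

```python
# Toy model of a browser-side capability registry: providers register
# manifests under a capability type, and requesting pages get the list
# back so the browser can render a generic service picker.

from typing import Dict, List


class CapabilityRegistry:
    def __init__(self) -> None:
        self._providers: Dict[str, List[dict]] = {}

    def register(self, capability: str, manifest: dict) -> None:
        """Called when the user approves a site's manifest."""
        self._providers.setdefault(capability, []).append(manifest)

    def request(self, capability: str) -> List[dict]:
        """Called by a page that needs the capability; the browser would
        show a picker over this list, then proxy the chosen API."""
        return self._providers.get(capability, [])


registry = CapabilityRegistry()
registry.register("file-service",
                  {"name": "Dropbox", "endpoint": "https://example.com/api"})
registry.register("file-service",
                  {"name": "OneDrive", "endpoint": "https://example.org/api"})

choices = registry.request("file-service")
print([p["name"] for p in choices])  # the page renders a generic selector
```

The page never hard-codes a specific vendor; it only names the capability type and lets the user's registry supply candidates.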

I hope someone sees this excited rant and can make it happen! :)

I also really do hope the work that's been done in Sandstorm continues on and pushes things further for all of us. Thanks!


Wasn’t much of this covered by the whole Semantic Web effort? They built a ton of stuff for self-describing data (RDF) that federated over multiple hosts by default (SPARQL), provided meaningful names (URIs everywhere) and multiple tools for giving all this semantic meaning (OWL, RDFS).

It stalled for many many reasons, including:

- XML is kind of awful

- Over-general solution without clear advantages for a specific use case

- Mis-aligned incentives

- easy to abuse open query capabilities

- Hard to use for both data publishers and consumers

I think that JSON Schema with incremental enhancement via JSON-LD is a more promising tech stack for another try. That would let you take advantage of the massive investments in the current API ecosystem while carrying forward the best parts of the old Semantic Web Effort.

Of course the incentives are still hard to sort out. Commercial entities WANT lock-in. They will Embrace/Extend/Extinguish anything they can because they need a moat to make money. Honestly anti-trust regulations might be needed.


I think that idea was the concept behind WebIntents/WebActivities, which were explored by the Chrome team and by Mozilla. Efforts seem to have stalled, though.

https://en.wikipedia.org/wiki/Web_Intents


Thanks for the link!

Ya this looks like it could've definitely fit the bill. Too bad it never took off. And worse yet, both major browsers tried their own version and neither took off.

This just seems so easy an idea to get on board with, like VSCode solving the n-to-1 problem with their language server api. I don't understand why it fizzled out.


Loved your work & was an early supporter. If you haven't already, I think you should take a look at some really cool new possibilities in the app development space like automerge/hypermerge, braid.id, and gun.eco. These offer a new path forward where a home server is just one possible topology, but the data is freed from servers. See also https://github.com/SmithSamuelM/Papers/blob/master/presentat... (I particularly like how Sam has coined the term "autonomic data", meaning data that embeds its own authorization through encryption--once you separate authorization from distribution, you can do some amazing things).


One thing I appreciated about the Sandstorm architecture, and I wonder if you now consider this a mistake, is that each grain holds its own state, using ordinary files and possibly in memory (since IIUC, there's one instance of a grain running at any given time).

But I would have liked to use an architecture like this in a SaaS application. We may wish that applications were self-hosted, but SaaS is what people are used to, and there's a lot of money in it. An architecture for SaaS applications based on many small, stateful units, like Sandstorm grains, would be a refreshing alternative to the current norm of trying to make everything stateless and web-scale. And I think it's something you could have sold. I remember proposing something like this to you on an HN comment thread a few years ago, but maybe I didn't articulate it well.

The reason I wonder if you now see the statefulness of grains as a mistake is that Cloudflare Workers are going even further in the other direction. Whereas Sandstorm grains can be long-running, Cloudflare Workers are very short-lived. Of course, I realize that you aren't explicitly developing Cloudflare Workers as the successor to Sandstorm; it's a new product with its own requirements. Still, I wonder if your thinking on that part of Sandstorm has changed.


I still very much believe in the fine-grained instances model.

Cloudflare Workers is a long, long way from done. Stay tuned... ;)


> An architecture for SaaS applications based on many small, stateful units, like Sandstorm grains, would be a refreshing alternative to the current norm of trying to make everything stateless and web-scale

Can you expand on this? The way I see things, the more of my stack that's stateless the better. That goes equally at the application level and at the distributed system level. There are so many advantages to designing things that way that it's hard to imagine going back to the stateful "bad old days".

So I'm curious to read a treatise about this alternative small, stateful units model you describe.


Statelessness is desirable in large-scale cloud architecture because it means you can auto-scale by just running more instances of the thing and load balancing, and you can fix any problem by just nuking the whole instance and making a new one. Also, storing data reliably in a large distributed system is an extraordinarily difficult problem which most people have no hope of getting right, so it's best to rely on an off-the-shelf database written by people who know what they're doing.

But these are problems that only exist because you're trying to create a mega-scale centralized application.

In Sandstorm's granular model, you don't have mega-scale applications. You have many distributed small instances. The programming model ends up being much more like desktop or mobile apps. In those environments, state has never been that big a deal. You don't need a database that can store and index petabytes of data; you can use something simple like sqlite or maybe even flat files. You don't need to think about machine failure, backups, etc.; that is the OS and/or device owner's problem. (Meanwhile, small-scale instances lead to clear data ownership which means that the owner can take responsibility for backups, and a distributed OS can potentially migrate grains around as needed to work around machine failures, transparently to the app.)
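A small illustration of that point: inside one grain, "state" is just a local SQLite file (or flat files). The app carries no replication or sharding logic; durability and backup are the platform's or the owner's problem. The `Grain` class and storage path are illustrative:

```python
# One grain = one small app instance (e.g. a single to-do list) whose
# entire state lives in a local SQLite database. No distributed database,
# no scaling concerns inside the app itself.

import sqlite3


class Grain:
    """One small app instance, e.g. a single to-do list."""

    def __init__(self, path: str = ":memory:") -> None:
        # On Sandstorm this path would sit inside the grain's private storage.
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS items (text TEXT)")

    def add(self, text: str) -> None:
        self.db.execute("INSERT INTO items VALUES (?)", (text,))
        self.db.commit()

    def items(self):
        return [row[0] for row in self.db.execute("SELECT text FROM items")]


grain = Grain()
grain.add("write postmortem")
grain.add("ship update")
print(grain.items())
```

The programming model really is closer to a desktop app than to a cloud service: local file, local queries, done.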

Another big problem with the stateless approach to big web apps is that off-the-shelf databases are often a terrible fit for specific use cases, but typically changing the database to suit your needs is not an option. Databases tend to be particularly bad at real-time updates, e.g. like in Google Docs where you can see other people typing. To implement Google Docs reasonably, you need a stateful server; you need for all users of the same document to land on the same server instance so that coordination and routing can occur in-memory. That's incredibly difficult to achieve in the stateless cloud architecture orthodoxy, but incredibly easy to achieve in Sandstorm's model.


> you need a stateful server; you need for all users of the same document to land on the same server instance so that coordination and routing can occur in-memory. That's incredibly difficult to achieve in the stateless cloud architecture orthodoxy

Exactly, that's what I was getting at. I'm sure Google has a lot of internal infrastructure and know-how for doing this. But outside of giants like Google and Microsoft, AFAIK, tools and techniques for developing stateful web applications that are reliable and scalable don't seem to be widely known. And the orthodoxy, as you aptly put it, is so strong that I'd venture to say that most of us don't even consider the stateful alternative. So for those of us who do think about it, it seems like the unsafe choice that we're better off avoiding. I hope that changes.


We might learn from the game dev community, where stateful servers are quite common. It's interesting to see that, for example, agar.io (a web-based realtime game) started in C++ and moved to Erlang, both of which are uncommon choices for web software, though the latter seems to be rising, particularly in the form of Elixir.


I wonder how much of those are actual problems and how much is FUD, though.

At least, I remember benchmarks for Node.js a few years ago that showed a single instance handling a million concurrent, long-lived connections - on a single thread!

So if you're not aiming for Google-scale load, the field still seems interesting to explore.


For starters, I'll refer you to Sandstorm's own "how it works" page [1], which explains it quite well IMO. Basically, the Sandstorm model works well for applications that consist of many small things -- documents, chat rooms, project boards, etc. -- each of which can easily run in a single process. Each grain can run in a self-contained process, with its live state being stored in memory, and using the filesystem for durability. When things happen within the grain, it can easily send out notifications to all connected clients, e.g. through WebSockets, because all clients are communicating with that one process (indirectly through a proxy, of course). Consider how much easier it is to implement real-time things, like chat and collaborative document editing, this way, compared to using message brokers, distributed pubsub systems, etc. As for scaling and high availability, as the page I linked explains, the platform takes care of that.

[1]: https://sandstorm.io/how-it-works
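The single-process model described above can be caricatured in a few lines. Because every client talks to the one grain process, fan-out is just a local loop; in reality the updates would flow over WebSockets through Sandstorm's proxy. The `ChatGrain` class is made up for illustration:

```python
# A toy stateful grain: live state in memory, updates pushed directly to
# every connected client. No message broker or distributed pubsub needed,
# because all clients connect to this one process.

from typing import Callable, List


class ChatGrain:
    def __init__(self) -> None:
        self.messages: List[str] = []                 # live state, in memory
        self.clients: List[Callable[[str], None]] = []

    def connect(self, on_message: Callable[[str], None]) -> None:
        self.clients.append(on_message)

    def post(self, text: str) -> None:
        self.messages.append(text)
        # Fan-out is a local loop over in-process connections.
        for send in self.clients:
            send(text)


grain = ChatGrain()
alice_inbox, bob_inbox = [], []
grain.connect(alice_inbox.append)
grain.connect(bob_inbox.append)
grain.post("hello everyone")
print(alice_inbox, bob_inbox)
```

Compare this with the stateless version, where the same feature needs sticky routing or an external pubsub layer just to get a message from one user to another.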


We're broadly working towards a lot of what you're saying here at https://micro.mu. It's an incremental process which I fundamentally believe needs to begin with giving developers a new method of development guided by a framework. We're leaning more towards the android model of managing a complex runtime and providing the framework for developing in that environment.

We'll probably leverage cloud, edge, personal servers, mobile and any other device in the long term to move services to the user.

I share a lot of your ideas and philosophy. Shame it didn't come to fruition with sandstorm.


Any Sandstorm developers reading this... I would like to invite you to reach out to me. We’re trying to build this decentralized future and we’re making revenues — would be great to explore collaboration.

Username: greg Domain: qbix.com

See https://qbix.com/QBUX/whitepaper.html for the economics of it


>never mind that such architecture is actively hostile towards small-scale self-hosting

Can you provide some information about why this is? Are you talking about vendor lock-in with certain cloud providers? Because Kubernetes, for instance, provides a layer that lets you move applications around (even locally with Minikube) with no fuss. So I suspect I'm misunderstanding what you're referring to.


1) Maintaining a personal server should be no more difficult than maintaining your phone, something that non-technical people can do. No command lines, no config files. Kubernetes obviously isn't that.

2) Kubernetes and other popular cloud infrastructure are designed to scale up to massive traffic, but they are not designed to scale down. For a personal server, you want to be able to install hundreds of apps on a single, modest machine, and only have apps consuming resources when the user is actively using them.


Thanks for the thoughts.

> My new hope is that "mainstream" cloud infrastructure will push towards being incrementally more and more decentralized on a technical level

That sounds pretty close to what I'm doing with my current project. Start with a centralized platform that is instanced per-user under the hood, provide tangible benefits vs the big guys, and eventually open source it for self-hosting. That's the dream anyway.


I always thought one of the big barriers was porting existing apps: you need to disable/rip out their auth and user management, and you need to pare them down to operate on only a single unit of whatever thing they manage (a grain). These changes are often hard to upstream, so you’ll be applying upstream updates over and over. I was hoping we’d see a bunch of small purpose-built apps that were perhaps companions to a standalone app that just needed a small internet-accessible component, such as IoT devices or even mobile apps that need a way to host shareable links.


I investigated capnproto for a while; how do you feel about it compared to the alternatives?


Disclaimer: I've only looked fairly closely at capnproto, RSocket, and gRPC, never used them myself.

It depends on what you need to do. What's your use case? My primary constraint is that I need web browser compatibility, ie websockets currently. If capnproto had a solid browser JS implementation I would say use it hands down, if it has the features you need. There's some work being done to shoehorn gRPC into the browser[0], but it seems very complicated to me (requires a proxy server IIRC). If you need a robust browser solution today, take a look at RSocket[1]. I think the reason RSocket isn't talked about more is that it comes across as very "enterprisey", but on a technical level it looks very good to me. You may also be interested in my minimalist approach, omnistreams[2]. It basically adds backpressure and multiplexing on top of websockets. We're running that in production at iobio.io, but the API isn't stable yet (would love external input).

One big caveat of capnproto vs gRPC is that capnproto doesn't really have built-in "stream" mechanics, ie the concept of making a request that you expect to return an unbounded list of elements. I've mentioned this before and kentonv explained a way it could be mimicked though[3].

[0] https://grpc.io/blog/state-of-grpc-web/

[1] http://rsocket.io/

[2] https://github.com/omnistreams/omnistreams-spec

[3] https://news.ycombinator.com/item?id=20041250
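The workaround mentioned above for unbounded result streams can be sketched generically: instead of a built-in stream type, the client passes a callback, and the server invokes it once per element and then signals completion. This is plain Python standing in for RPC calls, not Cap'n Proto's actual API; the function and parameter names are invented for illustration:

```python
# Mimicking a result stream in an RPC system without built-in streaming:
# the client supplies per-element and completion callbacks (which, in a
# capability-based RPC system, would be remote capabilities the server
# calls back into).

from typing import Callable, Iterable


def list_entries(entries: Iterable[str],
                 on_next: Callable[[str], None],
                 on_done: Callable[[], None]) -> None:
    """Server side: deliver an unbounded result set via repeated callbacks."""
    for entry in entries:
        on_next(entry)  # one callback invocation per streamed element
    on_done()           # explicit end-of-stream signal


# Client side: collect the streamed elements.
received = []
done = []
list_entries(["a", "b", "c"], received.append, lambda: done.append(True))
print(received, done)
```

In a real RPC setting the interesting part is flow control (not shown here): the server should wait for outstanding callback calls to be acknowledged before sending more.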


My reaction after reading this is that if it's too much burden for the original author to keep up with updates, it's very hard to make a case for people to self-host their own services using Sandstorm. Upgrade effort will now get multiplied many-fold as everybody has to duplicate the effort that one maintainer was putting in.

Sandstorm is an amazing project, and the people who built it are way better programmers than me. But I'm betting that we need to go even deeper and think about why the maintenance burden for software is so high.

https://github.com/akkartik/mu#readme

This is a very long-term project, and it's still early days. But none of the short-cuts seem to pan out. It makes me sad, and it stiffens my resolve.


> My reaction after reading this is that if it's too much burden for the original author to keep up with updates, it's very hard to make a case for people to self-host their own services using Sandstorm. Upgrade effort will now get multiplied many-fold as everybody has to duplicate the effort that one maintainer was putting in.

That's not correct.

Self-hosted sandstorm is completely auto-updating. You don't have to do anything. I still have to push updates periodically, but I intend to keep doing that. After I push an update, all Sandstorm servers update themselves automatically within 24 hours.

Updating Oasis was, ironically, much more work for me than pushing an update to self-hosted sandstorm. Oasis is built on a much more complicated cluster architecture designed to be scalable -- scale that, sadly, we never actually needed.


Kenton, pardon my hyperactivity, but here are a couple of observations you may find useful:

>Oasis is built on a much more complicated cluster architecture designed to be scalable

You could dog food Oasis on top of Sandstorm. For example, Oasis could be a Sandstorm app and use Sandstorm services. In this way, you could feel very closely where and when Sandstorm is lacking. And once you fix or invent those parts, the results would be available to all of your customers, and not exclusively to Oasis. This would shift the laborious spot from Oasis to Sandstorm, and would allow you to open up the enterprise gates.

The second observation is about the sales model. You sold a "decentralization idealism" as you coined it, but with an optional central place like Oasis. In my opinion, this offering represents a natural conflict and is not destined to work in a sustainable way. Why not sell a good old license for advanced/enterprisey features? Oasis would be a great add-on to that model for those who wanted to rent the resources. And yes, exclude the free tier from Oasis to make things fair.


Oasis is running Blackrock, which is basically a cluster architecture version of Sandstorm. And while it was originally an enterprise/paid offering only, Blackrock is also now open source. However, Blackrock was really never built out as far as intended, and really only runs on Google Cloud.

There's a fair bit of history you may not be aware of. For instance, the payment system used to be part of Blackrock, but then was moved over to Sandstorm when it was open sourced so you didn't need to run Blackrock to sell subscriptions to a Sandstorm server. And there used to be paid-only features, but when Sandstorm-the-company ran out of money, they were made free as well.


> You could dog food Oasis on top of Sandstorm.

That's not gonna work. Sandstorm is explicitly not designed to be big-cloud infrastructure, it's designed to host small-scale apps. Oasis is intended to be a big Sandstorm server that utilizes a cluster of machines. It therefore needs to sit on top of big-cloud infrastructure, not other Sandstorm servers.

> You sold a "decentralization idealism" as you coined it

That was teleclimber's phrase, not mine.

> In my opinion, this offering represents a natural conflict and is not destined to work in a sustainable way.

I agree, if I were doing it over I don't think I'd build Oasis. I'd focus instead on federation and migration features, and partnerships with hosting providers and device manufacturers to make setting up a private Sandstorm server as trivial as possible.

> Why not sell a good old license for advanced/enterprisey features?

We tried that: https://sandstorm.io/news/2016-08-31-sandstorm-for-work-read...

IIRC our total sales of Sandstorm for Work could be counted on one hand.


Back in the day I approached Sandstorm by email and asked if they could provide an Enterprise/Server edition of Sandstorm (hosted apps with stable/customizable endpoints, authentication and stuff).

I directly communicated to them that we are ready to shell out something like $2,000 per license.

The answer by Kenton surprised me. He said they were aiming to get the consumer market first. Which is a bit odd, given that the server is where the money is.

Enterprises of all kinds already have their hierarchies of customers. Moreover, they are in constant need of a simple but highly efficient server platform (witness the rise of Kubernetes). It would be a much easier sell to them.


You must have caught us early on, when our focus was more on attracting developers than attracting users. Later on we were trying very hard to find enterprise customers who would pay us something like $2000.


As a big Sandstorm user and contributor, this is a pretty sad day for me; I've found using Oasis to be pretty darn convenient. But I'll be setting up my self-hosted version this fall. Sandstorm hasn't seen a ton of improvement over the past year, but that hasn't been a bad thing: it's been a consistent and rock-solid product I can rely on whenever I need it.


Insightful sentence:

> While Sandstorm was popular on Hacker News, that popularity never really converted into paying users.


I always felt that Sandstorm would be an ideal project for one of the NAS vendors like Synology, QNAP, or Netgear to get behind.


This is one of the coolest, boldest concepts I ever saw, and put forth by an extremely capable programmer and human. I look forward to hearing more from Kenton in the future.


Kenton - You say we shouldn't rely on you to safeguard our data. How does this affect security updates? As it is, I don't put awfully sensitive things on there anyway because Linode still has access to it. But it would be good to know where to draw the line. I'm hoping that since you rely on it for email, you're gonna do basic security hole plugging at least?

Thanks as always in either case. Hoping it somehow gets new life down the line after this!


Yes, I plan to continue pushing security updates for Sandstorm as I have been for the last five years, and self-hosted Sandstorm is very good about auto-updating.

Meanwhile, with a self-hosted server, you get better security (compared to Oasis) due to the fact that only you and people you authorize can install apps on that server. For even more security you can of course put it on a private network or behind Cloudflare Access[0] for extra defense-in-depth.

Note: I don't rely on Sandstorm for email yet, because Sandstorm doesn't have good email support currently... but I'd like to build that support, because I'm getting really uncomfortable with gmail.

[0] https://www.cloudflare.com/products/cloudflare-access/ -- Disclosure: I work for Cloudflare.


Wow, I started to implement essentially the same thing for small businesses; I did not know about Sandstorm. Does this mean it's a bad idea?


I asked Kenton V about the reasons for the failure of the business. You might be interested in his reply[0].

My 0.02: I'd interview your target market to find out if this is a pain point they recognize. Many of these projects are started out of decentralization idealism (for good reason, and it's not a bad thing), but as a business you do not want to be left having to educate your potential buyers about why they need this.

[0] https://twitter.com/teleclimber/status/1173345736013910017


Agreed. You shouldn't have to explain someone's problem to them. Ideally when you present a good solution they will immediately match it to a pain point they're already experiencing.


To be fair, we didn't try to go around selling businesses on "decentralization idealism"; that would obviously be silly.

Plenty of enterprises already need to self-host services, for compliance reasons (ITAR, FISMA, HIPAA, FINRA, GDPR, national data locality laws in Germany, Russia, China, South Korea, and others), or sometimes even just paranoia. The need is very much there.

But, like, we literally didn't know where to start. Pick up a phone and just, like, call people? Apparently that's how sales works but I'm sure as hell not the person to do it. I hate it when people call me! How could I call someone? So we didn't call anyone, and mostly hoped that fans on Hacker News would go sell Sandstorm to their IT departments for us. That was dumb and didn't work.


There's a bottom-up path that some enterprise SaaS companies have taken (Slack and Airtable come to mind) where employees or departments just start using the product to get stuff done. Eventually IT finds out about it, but by that point it's ingrained enough that IT signs a contract just to rein in all of the one-off spending and data leakage. But for that to work, I think the apps would have to have above-and-beyond appeal to individual employees, whereas a lot of Sandstorm's appeal is to the organization itself.


Yeah, exactly. We tried to follow that model, but Sandstorm really wasn't prepared for it, because Sandstorm apps weren't actually better than cloud-based alternatives and so there wasn't a motivation for employees to adopt them.


> To be fair, we didn't try to go around selling businesses on "decentralization idealism"; that would obviously be silly.

Sorry, my comment should have been more clear. I wasn't referring to Sandstorm or you in my comment and I didn't mean to accuse of you of such silliness. It was more a reaction to the poster above who said they were targeting "small business" without specifying which kind of small business and what they are solving for them.

Anyways, yes, there are many reasons for businesses (even small ones) to self-host. Other reasons include the ability to customize (you can't customize a SaaS unless you're a gigantic customer) and longevity (SaaS offerings come and go, and switching costs can be high).


Yeah I definitely wasn't referring to sandstorm either.


Businesses may not specifically want "decentralization idealism", that's true. What they want is "Windows for Web".

Something where you could install an app and bingo, it instantly worked on all endpoints around the globe.

Or where you once install "yet another CDN provider" component (app), and it instantly and automatically covers all the installed apps.

Kubernetes came close to this, but it requires a significant amount of expertise, and honestly it is only a matter of time until it gets displaced by something better.

In short, what we (businesses) want is "Click and serve". Sandstorm was so close and yet did not make the expected steps in that direction. This makes me sad, but this is life.


>Kubernetes came close to this, but it requires a significant amount of expertise, and honestly it is only a matter of time until it gets displaced by something better.

I wish this were true, because a lot of code in Kubernetes is garbage and its entire architecture of constant reconciliation is quite wasteful, but at this point it has essentially won by being supported by vendors. It'll be a long time before something can really push it away. Which sucks, because nobody outside an enterprise environment wants the operational complexity or can run it, at least not in a way where you can expect a majority of Helm charts to work (insanely high compute requirements for the control/instrumentation plane, plus the requirement to provide a load balancer and a PVC provisioner).

I really want a unified "let's not think about servers" model for using compute and storage, but Kubernetes is not it for consumers.


Over-complicated "industry standard" solutions tend to fade away after the hype has died down. Examples that come to mind include SOAP and OpenStack. If you think Kubernetes falls into that category, I think it will go the same way before too long.


The difference between OpenStack and Kubernetes is that one was meant to power alternative private clouds, while the other just adds an abstraction layer on top of public clouds so everyone can buy into the lie of being independent from them. In my experience, the bare-metal code paths in Kubernetes are anywhere from mostly untested to entirely broken to simply unsupported in the first place.


Here are 26 private cloud Kubernetes users who would disagree: https://www.cncf.io/projects/case-studies/?_sft_cloud_type=p...


It's clear that you have zero experience with Kubernetes on bare metal. Why bother commenting on the topic then?


(This is a bit of an exaggeration but you get the idea.)


It depends on the specifics and execution. The likes of Bitnami, NextCloud, Cloudron, and YunoHost continue to operate, whether acquired or standalone.


Did you take VC?


No.


A week ago I mentioned that it was half dead, guess this is the final nail in its coffin.

https://news.ycombinator.com/item?id=20933849


My hope is that someday we'll see a resurgence of development for it, or the creation of something new like it. A lot of open source projects have survived on a very slow burn for a very long time, only to show up again later.

What's truly unfortunate is that Sandstorm was massively ahead of its time: a lot more people are willing to understand the need for Sandstorm today than were just a few years ago, when Sandstorm was an active development project. I feel like if Sandstorm had launched in 2019, it'd have enjoyed a lot wider support than people were ready to give in 2014.


There will be a continual need to "hold your own corner" of the internet (self-hosting common services) as it becomes even more segmented by Google, Facebook, and advertising companies like them.


As an alternative, cloudron.io is a good way to run apps on your own server. I think one of the main differences is that the app packages are regularly updated and maintained. And it uses Docker, so it's a bit easier to sort out problems.


It also costs $30 a month if you want to use it with more than 2 apps. And I think Cloudron is an excellent project and I would be happy to pay, but paying 3x or 6x the cost of my VPS for an easier way to run apps is just too rich for my blood.


https://sandstorm.io/install

> Alternatively, you can let us run the server for you: Use Sandstorm Oasis

They still have this remark on the install page.


There are probably still several references to signing up for Oasis on the website. They'll all get purged with time.


This is sad. But there is still hope. Open source is very hard to kill. Anyone, anywhere can take up the mantle and Sandstorm will go on.


What is it?


Sandstorm is an open source platform for web apps. Basically, instead of having to host a bunch of different servers for the different web apps you want to run, they're packaged and installed much like you'd install and run apps on your phone -- just on a web server.

For me it entirely replaced Google Docs (using Etherpad and EtherCalc) and Trello (using Wekan) in particular. I'm also super dependent on it for my RSS reader with Tiny Tiny RSS. Everything on it you could host yourself individually, but it's easier and more secure via Sandstorm.


For those who think this sounds like Docker: in a way, it's similar. But while Docker is mostly designed for packaging and running regular Linux server software, Sandstorm provides a common framework (particularly around authentication, authorization, configuration, and upgrading). This makes the whole user experience much more unified and requires less fiddling with configuration files or terminal commands, while exposing a smaller attack surface even if an attacker manages to exploit a particular app (there are various levels of isolation, even between different documents of the same app).

https://sandstorm.io/news/2014-08-19-why-not-run-docker-apps


I also like to describe sandstorm as a bit closer to native mobile app development: the developer building the app has to think harder about what permissions (capabilities) their app needs, and in exchange a non-developer can install the app safely and have more confidence that the app isn’t full of security holes or leaking their data, intentionally or unintentionally.


I read the article, but I'm still unclear on what "Oasis" is. Is that what they call their free offering?


https://oasis.sandstorm.io

"""Sandstorm Oasis is hosted by the Sandstorm team. Sandstorm is open source; you can host it on your own server."""


Oasis is the hosting service they offered (for hosting Sandstorm instance).



Sounds like awesome tech with a bad sales plan. Would you consider selling?


The tech is all open source. Anyone is free to try to make a business out of it (under a new name), no need to buy from us.


Selling what?


That's sad news


Never heard of them. June 31, 2020? I'm guessing this is a weird leap-year bug.

Dates are hard. Use a library. Always.
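To illustrate the point, a minimal Python sketch (Python is just my choice here; any decent date library does the same validation, which is exactly why you should use one):

```python
from datetime import date

# A date library validates day-of-month up front: June has 30 days,
# so constructing June 31 raises immediately instead of silently
# producing a bogus date that breaks downstream.
try:
    date(2020, 6, 31)
    valid = True
except ValueError:
    valid = False

print(valid)  # False -- the library caught the mistake for us
```

Hand-rolled date arithmetic tends to fail silently in exactly this spot; a library turns the mistake into a loud error at the point of creation.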


No it's just me, a human, being an idiot and forgetting how many days are in June. Fixed now.


Fair enough. I'm glad you fixed it.


There's a not-unserious push to switch to permanent daylight saving time in Cascadia (the western part of North America). I wonder what this will do to all the date code that doesn't use a library, or to libraries that aren't updated.


DST rules change all the time, all over the world. Any software that gets DST right today must be using a library. Even if you only look at the US, ten years ago Congress changed the dates that DST starts and ends, and it wasn't a big deal. There are also many places in the US today that use permanent DST or permanent non-DST, so that's not even new, really.
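The "some places already opt out of DST" point is visible directly in the IANA tz database. A sketch using Python's standard-library zoneinfo (zone names are from the standard tzdata set; Arizona famously stays on MST year-round while neighboring Denver shifts):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # reads the IANA tz database

# Phoenix observes permanent standard time; Denver observes DST.
july_phx = datetime(2019, 7, 1, tzinfo=ZoneInfo("America/Phoenix"))
july_den = datetime(2019, 7, 1, tzinfo=ZoneInfo("America/Denver"))
jan_phx = datetime(2019, 1, 1, tzinfo=ZoneInfo("America/Phoenix"))
jan_den = datetime(2019, 1, 1, tzinfo=ZoneInfo("America/Denver"))

print(july_phx.utcoffset() == july_den.utcoffset())  # False: Denver is on DST in July
print(jan_phx.utcoffset() == jan_den.utcoffset())    # True: both at UTC-7 in January
```

The library gets this right only because the tz database encodes each region's rule history, which is exactly the data a hand-rolled DST calculation would have to maintain itself every time a legislature changes the rules.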


Heh. Hence my comment. I actually had to deal with that DST change on a bunch of embedded systems back in 2006 (or so). Super fun pushing updates to 100+ systems over analog modems.



