REST was never about CRUD (tyk.io)
323 points by elkinthewoods on July 19, 2018 | 207 comments



I know it's not technically correct, but the conceptual definition I've found useful for myself is that REST is about operating on the HTTP layer instead of building your own layers on top of it. Why create your own custom error protocol when you can use HTTP error codes? Why develop your own caching strategy when you can use HTTP caching? Why create your own authentication mechanism when you can use HTTP authentication over TLS (or client certificates, etc.)? When the article talks about how REST should be layered, cacheable, and client-agnostic, that's really the core of it to me.

By thinking this way, you can see how REST helps you re-use already-existing infrastructure---proxies, load balancers, caches, etc.---instead of spending time and effort making custom solutions. It also helps provide guidance as to when REST may not be appropriate: can you answer those "why not HTTP?" questions?
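To make that concrete (a hypothetical exchange; the path, status codes, and header values are just illustrative):

  GET /reports/42
  HTTP/1.1 401 Unauthorized
  WWW-Authenticate: Bearer realm="api"

  GET /reports/42 (with an Authorization header)
  HTTP/1.1 200 OK
  Cache-Control: private, max-age=60
  ETag: "a1b9c3"

Every proxy, cache, and client library already understands those status codes and headers; a bespoke {"error": ...} envelope behind a blanket 200 is invisible to all of them.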

I've also found focusing on the resource/noun aspect (beyond considering caching) to be a bit of a "nerd trap"; it's possible to spend a lot of time chasing the perfect, "correct", solution. As the article notes, there's nothing wrong with "verby" URLs as long as you are using HTTP instead of doing your own thing.


None of this has anything to do with REST. You can do RPC over HTTP and do all those just fine, that's just called being a good HTTP citizen.

REST is about being hypertext-driven[0] and essentially nothing else.

[0] https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...


That link is a great representation of why the term "REST" is so often misused; the author's language is opaque and he makes no attempt to make his thesis accessible to readers. He speaks entirely in abstractions and generalities; nothing is grounded in familiar terminology.

The very first comment from that post asks what he means by "hypertext", a fairly fundamental prerequisite for following his thesis. He replies with the following:

> Hypertext has many definitions

> When I say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions

I challenge anyone to transform something as simple as "hypertext is text with links to other text in it" into a more convoluted and unnecessarily abstracted definition than the one above.


> the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions

What's unclear or abstract about that? To choose an action, the agent must know what actions are available and how to call them. Whatever presents that information to the agent is the affordance. Fielding says that in hypertext, you get that as part of the actual content, as opposed to via documentation or some other channel.

> I challenge anyone to transform something as simple as "hypertext is text with links to other text in it" into a more convoluted and unnecessarily abstracted definition than the one above.

Because that's not what it is, in general. See Xanadu for an example of vastly more general kinds of linkages than references.


> Because that's not what it is, in general. See Xanadu for an example of vastly more general kinds of linkages than references.

The purpose of very abstract, generalised, academic (also legal) language is to attempt to achieve perfect technical correctness in description. Unfortunately this usually comes at the cost of understanding.

"Text with links to other text" obviously won't include 100.000% of applications conforming to RESTfulness, but it does describe what RESTfulness intends far better and more accessibly than a convoluted explanation, which very effectively serves the purpose of informing people calling RPC REST that it is not in fact REST (for example). It also gives a broad audience a general direction to work toward if they intend to use REST (or a reasonable argument not to use it if it's not what they feel they need).

The accessible explanation serves an actual practical purpose, the pedantic explanation has no actual benefit. The fact that you had to reference a project as esoteric and dead-end as Xanadu is testament to this.


>What's unclear or abstract about that? To choose an action, the agent must know what actions are available and how to call them. Whatever presents that information to the agent is the affordance. Fielding says that in hyptertext, you get that as part of the actual content as opposed to via documentation or some other channel.

But then that would be wrong. You had to know, through documentation and other out-of-band education, what you’re supposed to do with these `<a href=...` things. Ditto for any of the other options a page might present.

When you first used the web, you didn’t know what href meant. You had to read some instructions, or depend on UX designers’ decision to graphically represent a-tags in a way that suggests clickability and followability. That’s already outside his REST model.

Fielding is not being clear on how REST’s assumptions about “what you already know out-of-band” differ from other models.


> I challenge anyone to transform something as simple as "hypertext is text with links to other text in it" into a more convoluted and unnecessarily abstracted definition than the one above.

Except text+links does not encompass the full meaning of hypertext. Forms and inputs are also possible, for instance. The more general definition you cite encompasses all of this (information+controls), and more possible innovations than your very specific definition which lists only the current ways.


That was my exact thought process. No examples, familiar terminology, or simple language: just architectural astronautery par excellence.

I always find myself sceptical of people who appear to hold an almost religious level of belief about some programming construct or protocol and yet are unable to express themselves plainly. It comes off like a developer-ish version of business speak and isn't generally useful for moving a discussion forward or solving any problems.


He's just an academic, using academic language.


some people just love to write with as many big words as possible. i hope they challenge themselves to do the opposite, to use as many common words as possible, and see what comes out of it.


It is because he seems to be thinking of hypertext as HTML (i.e. he is mixing the concept "hypertext" with the language "HTML" that is used to realize the concept (and much more)).


Unfortunately, that is typical of academic texts, where that embellishment is expected, if not almost mandatory for the whole thing to be approved by the jury.


While it's certainly true of quite a few academic texts, I've read quite a lot for which it's not the case. I don't think it's mandatory, particularly for jury approval, rather just a quirk of many academic individuals.

Moreover, this is a blog post and comments on a blog post: academic juries aren't the intended audience.


I've basically abandoned the term "REST", because for all X where X is the current topic of conversation, REST is not X.

Any term this confused just isn't useful.

"Monad" seems to be similarly confused, but there's a concrete math concept to fall back on, and concrete definitions in various languages. REST doesn't have that. No matter where I try to stick the pin in the landscape and say "That is REST", I am reliably informed by half-a-dozen people that no, that's not it, it's over there... and I get six different "theres".

(No, the thesis does not define it, in practice. Even when people reference it, they still point to different places! And besides... so a guy sat down and wrote a thesis paper on a word... that's great and very nice for him, and not worthless, but let's not go overestimating how much claim that gives you to a particular word. I certainly don't grant it enough claim to overcome the fact that still nobody knows what it means.)

Useless.

(I've carefully written this to be clear that I'm speaking about the term. Whatever $YOU think REST is may very well have good ideas in it, but labeling it REST only makes things worse.)


> '"Monad" seems to be similarly confused'

In most cases Monad goes in the opposite direction from REST: most people discover that yes, what they've been writing is a monad, they just didn't know it. Conversely, most people are told that no, they are not doing REST right, because REST is actually about <something else> ;)


Monad is a scary name for a very simple thing, while REST is a very simple name for something that is probably scary, because I don't think I have ever seen it.


Terminological confusion appears to be our bread and butter these days.

There are large companies calling their MVP "MVC", and then you have to explain that "MVC solves the problems of MVC" (i.e. real MVC solves the problems of Apple-style MVC, and is being rediscovered all the time as the "unidirectional data flow" of things like React).

And then there's "React", which doesn't actually have anything to do with "Reactive", but has similar enough marketing that you think it might.

Which gets us to the current champion of muddled terminology: "Reactive Programming" or "FRP".

So although the various "reactive" (Rx) approaches are often called "FRP" (Functional Reactive Programming), they actually are not at all the same thing. At least according to the person who came up with FRP and coined the term. Of course, he no longer thinks that was a good name for what he did, preferring "Functional Temporal Programming".

And that's a good point, because as Backus points out (see https://news.ycombinator.com/item?id=17564186), FP doesn't handle time well, and so Functional Temporal Programming tries to address that deficiency.

Does that mean that the others are off the hook in calling their stuff "FRP" or "Reactive"? Nope, as it still doesn't make much sense, "reactive" being a description of systems not of implementation technology, and the implementation technology being a form of (synchronous) dataflow that can be combined with either imperative or functional languages.

So there is nothing essentially functional or essentially reactive about this stuff, and so when you add Rx or "FRP" to a non-functional language, you are adding a somewhat convoluted way of expressing dataflow, convoluted because of the unnecessary detour via FP. And while it does add the ability to build reactive systems to FP languages (which otherwise lack this ability), it does not add reactivity to non-FP languages.

And although the word "reactive" and the language used to describe these systems suggest high performance, most "FRP" libraries add about an order of magnitude of overhead...so they are slow.

Compared to that, REST is straightforward and well-defined.


Meanwhile I'm going to keep writing code that does stuff, and avoid labeling it.

Purity is useful in places, but serves no purpose if complicated enough that I can't explain it to my team.

FWIW React is a nice enough library, JSX is the first and only angle bracket UI design system that I like (sorry XAML), and it all works well enough with comparatively little ramp up.

I'll let other people argue what the proper abbreviation to describe my code is.


> and avoid labeling it.

There are two hard problems in computer science: cache invalidation and naming things. Yes, and off-by-one errors.

Yes, you can just build stuff and not worry. And that can work out fine. On the other hand, muddles like these sow confusion and make sure you can't figure out how things hang together.


> Why create your own custom error protocol when you can use HTTP error codes?

Because HTTP error codes are fixed and have well-defined meanings for HTTP that may not map well to your application. Repurposing HTTP error codes conflates your application-specific errors unless it maps to an exact subset of one of the predefined errors.

> Why create your own authentication mechanism when you can use HTTP authentication over TLS (or client certificates, or etc.)?

Do you have a client cert? I've never encountered any server that supports client certs in my entire 15-year career. Also, TLS is not a part of HTTP and HTTP only supported Basic Auth until fairly recently.


> Repurposing HTTP error codes conflates your application-specific errors unless it maps to an exact subset of one of the predefined errors.

...and beware even the errors that match exactly! I've seen multiple systems in the wild that treat 404 as an application-specific error for "content not found". It sounds perfectly reasonable at first, but it puts you one misconfigured reverse proxy away from inadvertently broadcasting to the world that all your content is gone.

This can have data-corrupting effects. If an external system deletes resources via a reliable message queue and treats 404 as a success condition, they run the risk of dropping these messages and creating an inconsistent state. I've seen it happen.

Keep application-specific error messages distinguishable from the 'plumbing'.
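One sketch of how to keep them distinguishable (the media type and body fields here are made up for illustration): have the application stamp its own 404s with a machine-readable body, so a bare 404 from a misconfigured proxy can't be mistaken for a deliberate one:

  HTTP/1.1 404 Not Found
  Content-Type: application/vnd.example.error+json

  {"code": "CONTENT_NOT_FOUND", "id": 9008}

A consumer can then treat a 404 without that body as plumbing to retry or alert on, never as confirmation that the content is gone.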


That seems like a bug with the external system. 4xx means the request needs to be corrected to achieve success. 2xx means success, even if success means "deleted".

If a system treated 5xx responses as successful, you'd consider that a problem with the system's response handling, not a problem with your own communication of state. So why consider abuse of 4xx to be a problem with the communication protocol?


The external system didn't define the API; the service did. If you poke around on stackoverflow, you'll find quite a number of answers recommending this pattern. It's a bad idea, but it's not uncommon.


Recommending what, that a `DELETE /foo/bar` should return a 404 on success? That's flat-out wrong. It should return a 204 No Content. A subsequent `GET /foo/bar` should return a 404, but presumably the message queue isn't doing a subsequent GET but is inspecting the result of the DELETE.


Ok. But again, a bad design choice doesn’t mean using HTTP error codes is a bad idea. It means that like with anything else you need to know your tools.


If the delete goes onto a message queue, use a 202 Accepted; if it's handled synchronously over HTTP, a 204 only once it's actually deleted in real time. Then check later for proper deletion, or push the status of the job to the consumer. 404 as a success condition does not seem reliable to me.
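Sketched out (the paths are hypothetical), the queued variant looks something like:

  DELETE /items/42
  HTTP/1.1 202 Accepted
  Location: /jobs/delete-42

  GET /jobs/delete-42
  HTTP/1.1 200 OK
  {"status": "pending"}

The consumer polls the job resource (or gets pushed its status) instead of ever having to read a 404 as "success".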


So.much.experience and wisdom in this answer. I learned something new today. Thank you.


> If an external system deletes resources via a reliable message queue and treats 404 as a success condition, they run the risk of dropping these messages and creating an inconsistent state. I've seen it happen.

This would reflect an incorrect design on the part of the external system.

For example:

  PUT /queue/_hyn3
  => {"name": "_hyn3"}

  GET /queue
  <= {"name": "_hyn3"}

  ... processing ...

  GET /queue/_hyn3
  404

404 is not a success condition; it's also not necessarily a permanent error.

Queues might be "reliable", but the consumers should never be assumed to be, and so it's good to have consumers provide a "complete" state update somewhere, and ideally, if the queue is able, for the queue to restore the incomplete item if the consumer fails to complete it in a specific (lengthy) period of time, because the consumer itself might have ceased to exist or failed for some other reason.

If you need to ensure that each message is consumed once and only once, you need to ensure that your queue also has something like a "completed state":

  PUT /queue/_hyn3
  => {"name": "_hyn3"}

  GET /queue
  <= {"name": "_hyn3"}

  ... processing ...

  PUT /status/_hyn3
  {"name": "_hyn3", "status": "completed"}

  GET /queue/_hyn3
  404

  GET /status/_hyn3
  {"status": "completed"}

Obviously, there's an infinite number of ways to handle the semantics here, and certainly REST is not constrained in any way to just JSON, and perhaps you would prefer to put "completed" in the URL, but the point is that the architecture needs to support such a use case. TL;DR: don't rely on 404 as a permanent error, but it's absolutely appropriate for a document/content/file not found, even if the content will be found later or was found earlier.


> Do you have a client cert? I've never encountered any server that supports client certs in my entire 15-year career. Also, TLS is not a part of HTTP and HTTP only supported Basic Auth until fairly recently.

Client certificates are very common for API authentication (well-known examples: Kubernetes, Puppet, Saltstack, service meshes like Envoy and Consul...)


Surely all these APIs don't demand client certs, right? In the case of Kubernetes, a typical server cert is self-signed, so half of TLS's security measures are defeated. To mitigate that you have to white-list clients by checking their certs, but surely this is more trouble than getting a real server cert and sending your credentials over HTTP?


Server and client certificates belong to unrelated certificate hierarchies. You can have your server cert signed by $whateverCA and still run your own mini-CA to issue client certs and validate client certs against.

TLS client certificates are strictly better than passwords because they don't provide an impersonator with a wildcard (and if you are running over the internet, especially with mobile devices, you can get into that situation without being the subject of a targeted attack). There are fairly few ways to achieve that other than client certs (SRP and SAE come to mind, both of which have virtually no deployment).
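For anyone who hasn't wired this up, here's a minimal sketch of that split using Python's standard ssl module (the file names are placeholders): the server presents a chain from $whateverCA while validating clients against a completely separate mini-CA:

  import ssl

  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  # Server identity: certificate chain signed by a public CA.
  ctx.load_cert_chain(certfile="server-fullchain.pem", keyfile="server-key.pem")
  # Client authentication: accept only certs issued by our own mini-CA.
  ctx.load_verify_locations(cafile="client-ca.pem")
  ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert

You then hand ctx to whatever wraps your sockets; the client-CA side never touches the public hierarchy.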


This makes a lot more sense than the answers below, thank you.


Mutual strict TLS is quite common in medium and high security environments.


The master certificate is self-signed, yes.

This self-signed certificate now has to sign the client certificate(s). Otherwise, the clients aren't allowed to address the master.

At least that's how I've come across it. It's actually pretty common in the infrastructure world. Your ops team can probably tell you which services utilize it in your software stack, though you as a developer probably never had to worry about it.

Though a lot of services only use a single client certificate across all nodes, and just revoke the whole chain for rotations.


Sounds like these ops teams don't know what they are doing, but I'm not sure what you are saying is correct. Conceivably, in an ideal situation, the org has an internal intermediate CA created using a cert signed by a real root CA, and that intermediate CA's cert is in every client machine the org provisions to the employees, so the employees' machines can verify the server they are connecting to is authentic. For mutual authentication, the client certs let the servers authenticate in the other direction. Using a straight-up self-signed server cert is a one-way mechanism from server to client; it's easily spoofed, so this makes no sense to me.


It's nonetheless the way it's being done. A few examples:

* remote Docker daemon execution

* setting up a Kubernetes cluster with kubeadm and creating certificates for remote kubectl execution (this one might be related to the previous one; I'm not sure, as I don't know anything about the Kubernetes internals)

* RabbitMQ clusters, IIRC. At least that's how you're supposed to set it up with the Sensu monitoring framework

* the previously mentioned Puppet does this as well, though the process is mostly invisible to the user

I'm not sure how it's supposedly possible to spoof, though?

The master needs to sign a certificate for clients. If the master is compromised, everything is compromised. That's true no matter which authorization protocol you're using.


I don't know how these orchestration servers are set up in your experience, but in mine, these clusters are all in the cloud, which means accessing them requires going out over the public internet. On the off chance someone is actively MITMing the client's connection to the server (which isn't exactly hard), all the attacker has to do is present a cert with the same subject, issuer, and so on, but a different public key. How would the client know, if the only restriction is "accept a self-signed cert"? There has to be some sort of pinning mechanism, like OCSP must-staple, or a verification step the client performs to authenticate the server, or the entire org's Kubernetes clients will be DoS'ed. I don't even have to compromise anything. You are saying a self-signed server cert is how it is typically done; I'm questioning whether that's how it's supposed to be done.

The Kubernetes documentation says:

  For an identity provider to work with Kubernetes it must:

  1. Support OpenID connect discovery; not all do.
  2. Run in TLS with non-obsolete ciphers
  3. Have a CA signed certificate (even if the CA is not a commercial CA or is self signed)

I interpret it as saying you should preferably use a real CA or a well-behaved internal intermediate CA. Anybody remotely familiar with how TLS works will probably tell you the same. If what you are saying is correct, it's a serious problem in our industry.


Well yeah, it's essentially pinned... The client needs a cert signed by the master, after all. If you exchange the master cert, the signed certificate is no longer valid.

You only need a 'real' CA if your service is public-facing.

Heck, even a Windows AD server uses self-signed certs.

/Edit: after re-reading your comments I get the feeling that we're really talking about different things. The self-signed certificate becomes the base for your chain of trust. Everything signed by it is allowed to talk to the master. This verification happens on both sides. A client getting an unexpected certificate will refuse to talk to the master, just as the master won't talk with a node unless it's got a signed client cert.

Because the master cert is self-signed, it's easier to just throw everything away and recreate the whole chain of trust instead of bothering with revocations.

Finally, there is no benefit in using an official CA if you are in control of everything accessing the resources.

Am I making sense now?


This makes more sense, but I'm still confused by that self-signed bit. By "master cert", if you mean your internal CA's cert that's distributed across every client and server as part of your trust root, then this makes sense. If you are saying the master self-signed cert is served directly by the orchestration cluster, then this doesn't make sense, as the clients have no way of distinguishing one self-signed cert from another.


There is no point in importing it. The system store is never referenced in this setup.

The master cert (this is the self-signed part) becomes part of the client certs by signing them.

These client certs are now only valid with the original self-signed cert.

It's essentially a CA, only for the client certificates. It just isn't formalized, because it's never imported. You'd also lose the ability to rotate the certs quickly, without gaining any security, by importing this self-signed certificate as a CA.


In properly-set-up Kubernetes, no aspect of TLS is defeated. Instead, trusted CA certs (usually self-generated) are made freely available to everyone and then those are used to validate everything. If anyone is using self-signed certs and disabling TLS validation, then their implementation is probably insecure.


s/probably/provably/, tbh


I was going to say "definitely" but then I had the idea that they could be using some kind of crazy encapsulating proxy on localhost and then still using secure transport to go across the real network... but yeah no... no one should be running k8s on real servers without (fully-validated) TLS.


Puppet uses client certs, clients request them on an initial run and an admin can approve them in an included CA. There is an API and a CLI interface.

I'm always surprised by the resistance to client certs and all the gleeful usage of pre-shared keys I find. OWASP top-ten bad ideas for as long as I've been writing software.


Regarding detailing errors, we have this, FWIW: https://datatracker.ietf.org/doc/rfc7807/
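For reference, a problem+json response per that RFC carries a small set of standardized fields (type, title, status, detail, instance); the example below is adapted from the RFC itself:

  HTTP/1.1 403 Forbidden
  Content-Type: application/problem+json

  {"type": "https://example.com/probs/out-of-credit",
   "title": "You do not have enough credit.",
   "status": 403,
   "detail": "Your current balance is 30, but that costs 50.",
   "instance": "/account/12345/msgs/abc"}

The status line stays meaningful to the plumbing while the body carries the application-specific error.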


There's still free ranges of error codes you can use, eg. 460+ https://httpstatuses.com/


>I've also found focusing on the resource/noun aspect (beyond considering caching) to be a bit of a "nerd trap";

Agree when it comes to the difference between POST/DELETE/PUT/etc. but almost every time I've seen somebody doing a GET which mutated state somehow, the people doing it were always dismissive of the importance of adhering to HTTP semantics and it always inevitably came back to bite us in some way.

It was sometimes caching related but there are a lot more things that can go wrong than just that.

Using POST where GET should have been used hasn't caused horrendous bugs in the same way, but it still introduces annoyances.


> Using POST where GET should have been used hasn't caused horrendous bugs in the same way, but it still introduces annoyances.

In my experience, "using POST where GET should have been used" is normally a consequence of URL length limits and HTTP not supporting request bodies for GET. I've never understood the latter decision.


A related issue is the people who see a one-to-one relationship between the endpoints and the database records that are exposed through them, as opposed to exposing the business actions through resources and not being dependent on the underlying representation.


  I've also found focusing on the resource/noun 
  aspect ( ... ) to be a bit of a "nerd trap";
Oh, absolutely. Ten years ago, during one of my first projects involving a REST implementation, I was working with a person who I deemed fairly rational, and reasonable to work with.

Coding up the REST paths was a breeze, and I had implemented like 99% of perhaps a month's worth of work in under a week. I laid out some quick docs for how to interact with the URL patterns and, proud of how quickly the whole thing snapped into place, I showed it to the guy.

He immediately jumped all over me, complaining loudly that the URLs "were not RESTful!!!" Which, of course, shocked me. Was he joking? Certainly not. This was a very serious error.

Well, okay! Big deal!

So I spent 15 or 20 minutes performing search and replace, changing words from things like:

  /blue/104
  /read/9008
  /fetch/18

To things like:

  /car/104
  /book/9008
  /page/18

And wowee, crisis averted. I was still weeks ahead of schedule, and the commit was all of 10 minutes of ruminating over preferable synonyms to please the sensibilities of humans, rather than a struggle with computational correctness.

During this time, I briefly mulled over the idea of trolling him with a many-to-many look-up table, and introducing variable synonyms that include opaque MD5 hashes of my original path names, and produce the same effective resource, such as:

  /48d6215903dff56238e52e8891380c8f/104
  /ecae13117d6f0584c25a9da6c8f8415e/9008
  /5374034a40c8d6800cb4f449c2ea00a0/18

...but I otherwise enjoyed working with him, and didn't want to make enemies.

The pedantry and neurosis some people raise over obsessive semantics is painful to behold.


I don't think your pedantic coworker's criticism is unreasonable at all. APIs, HTTP or otherwise, should communicate what they actually do as best as possible. I have no idea what "blue" is supposed to mean in context, and I struggled to connect "blue" to "car."

What the article is saying, and what I think GP is saying, is that trying to model (non-CRUD) actions as "resources" in the pursuit of being "correct REST" is often a waste of time. `/nouns/8000/verb` is somewhat obvious to me (it does `verb` to `noun` 8000) despite not being "proper REST" whereas `/verb/8000` doesn't (what exactly am I `verb`'ing?), `/green/8000` would just make me scratch my head, and `/9f27410725ab8cc8854a2769c7a516b8/8000` tells me I'm wasting my time with this API and should go do something else.

As a user or developer I think being obvious and usable is more important than being "correct REST" and so I don't even use the term anymore, I just say HTTP API.

You might think it is just "pedantry" or "semantics" but consider that you are not the only one who will be using or reading these APIs, and design accordingly.

(However, if your coworker really was jumping up and screaming loudly at you because of this, this suggests he may not be as enjoyable to work with as you think)


You say that as an English speaker. What about the remainder of the world?

Must everything be left-to-right? What if the nouns and verbs were not English words? What if the natural order of possessives or adjectives is reversed in a romance language? What if the words are not even cognates, and the characters to express terms are not even letters, but opaque pictographs?

The paths chosen are an abstraction, and REST is designed such that it should be easy to bind any one resource to multiple URLs. But honestly, to the vast majority of the world, these URLs never see the light of day. Every consumer of a REST URL is probably going to use it programmatically, perhaps once, while reading the docs and setting up tests; then they'll call it a day and never look back, because honestly, consuming data from API endpoints isn't anyone's idea of "fun", despite wild claims to the contrary.

For the sake of getting shit done, I usually avoid this conversation in practice, but honestly, also because arguing is worthless. People have their opinions as a matter of convenience to their own efforts, and not as considerations to others.

These are the tastes of a given developer creeping in as a leaky abstraction of human readability.


If you take the "two hard problems in computer science" quote seriously -- or if you simply consider that humans are part of the system -- then naming issues probably deserve careful attention.

Then again, if humans are part of the system, it's also probably a good idea to use more reserve with coworkers than "jumping all over me" implies.


I think it's a fine definition, because it really highlights the problems with REST:

The HTTP layer is a very popular place to build applications, but it's fraught with problems and underspecification (or an impossibly large number of poor implementations: take your pick).

The number of TLS guides that recommend ignoring server certificates, or don't check CRL, or don't check certificate-transparency, or don't log and whitelist changes to the public key -- seem to have no end.

In the banking Enterprise, getting whitelisted on the HTTP proxy is infinitely harder than getting a port open.

And so on.

If you control the client, then there's absolutely no reason to use HTTP: Getting security, reliability, and redundancy on HTTP is extremely complicated, to which end I recommend people simply don't use it: Pre-shared certificates and smart clients are much easier to get right. You want caching? Specify and implement the strategy yourself (ideally using content caching instead of timestamps or some arbitrary tag).

If you don't control the client, treating HTTP as a tunnel for a protocol with pre-shared certificates and a smart client (e.g. OAUTH1a instead of OAUTH2) is still better. You want caching? Assume cache-infinity and build cache-busting into the protocol. HTTP clients will lie about the correctness of their caching strategy often enough to spoil any benefit a "browser-side" cache might bring.
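A common shape for that "cache-infinity plus cache-busting" approach (the hash and path are illustrative) is to key the URL on the content itself:

  GET /assets/app-3f9a2c7e.js
  HTTP/1.1 200 OK
  Cache-Control: public, max-age=31536000, immutable

New content gets a new hash and therefore a new URL, so even a lying cache is never asked about it.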

And REST?

My advice to developers is to ignore it.

When you say "why develop your own caching strategy", I say, because you have to anyway, otherwise you're leaving clients in the dust.

Why implement authentication? Because you have to anyway: Passwords/basic authentication are insecure, and everything else requires so much attention that you don't want decades of deformities around HTTP implementations getting in the way.

Error protocols? The error protocol is how you deal with it: Do you retry? Pause and let the user fix something? Alert the operator? HTTP proxies are notorious for modifying error codes so no matter how hard you try to signal in the HTTP stream, you're still in for pain.


I feel sorry for the poor guy who has to deal with your ad-hoc, informally-specified, bug-ridden, slow implementation of half of HTTP 10 years down the line.


> slow

As the author of perhaps the fastest dynamic HTTP server, I find this part particularly funny.

> informally-specified

Oh and this part. Anyone who thinks HTTP is well-specified and correctly implemented probably should be kept away from a computer.


> If you control the client, then there's absolutely no reason to use HTTP:

What about ensuring that your desired ports are open and your connection won't be eaten by a random firewall? You happened to pick a closed port, the user doesn't want to be bothered with firewalls, and suddenly your application is crap because it can't even get its network connection to work when every other alternative works out of the box.


What about it?

That's not a problem home-users have: They NAT.

Meanwhile in the enterprise, 80/443 is frequently locked into a particular (cheap) HTTP proxy server that can't manage basic features of HTTP correctly.


> That's not a problem home-users have:

Actually, it is.

Furthermore, I'm assuming you don't expect applications to work only in the confines of a lan.

There are strong reasons why HTTP is highly favoured for communication protocols. Heck, whole industry protocols have been scrapped in favour of HTTP-based ones for this problem alone. See RTP, and why it has been abandoned in favour of HTTP-based protocols such as MPEG-DASH, and why the industry made it a point to develop a whole new video streaming protocol based on HTTP.


> Actually, it is.

I'm saying almost all home-users have NAT and can communicate to remote services on most high ports.

If you're claiming otherwise, you should be able to provide some evidence to back that up.

> There are strong reasons why HTTP is highly favoured for communication protocols.

Yes. Like where they don't control the client, as in the part right after the part you quoted.

What exactly is your point?


>I'm saying almost all home-users have NAT and can communicate to remote services on most high ports.

Sorry, that's patently false and flies in the face of reality.

>If you're claiming otherwise, you should be able to provide some evidence to back that up.

You can start here:

https://news.ycombinator.com/item?id=17569715

> Yes. Like where they don't control the client

That is patently false. The problem isn't caused by not controlling the client. The primary problem is that they don't control the network, which means each and every single node between client and server, and HTTP reduces or eliminates the chance that one of those nodes doesn't cooperate.


While I agree that we should not reinvent the wheel, caching data is often tricky with REST. In my experience, the caching aspect of HTTP can introduce unexpected behaviour into applications. To get round this, I see a lot of devs use POST requests for everything, and implement their own custom caching layer. Basically, I think HTTP caching is fine for files, but once you start working with complex data, things get cumbersome rather quickly.


For caching to work, you need to sit down—ideally with business stakeholders—and decide what the caching policy should be for each data element. For instance, in a healthcare domain, your core patient demographics—name, DOB, ethnicity, gender...—can likely have a fairly long TTL (perhaps even a day or more), since those attributes don't tend to change very often. However, something like a patient's list of upcoming appointments should have a very short TTL, since it changes more frequently.

Once you've made those decisions from a business perspective, you can have your services send the HTTP headers to effect that caching behavior.
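In header terms (the paths and TTLs are illustrative, following whatever the business decided):

  GET /patients/123/demographics
  Cache-Control: private, max-age=86400

  GET /patients/123/appointments
  Cache-Control: private, max-age=30

The private directive keeps shared intermediaries from storing the responses, while still letting each authorized client reuse them for the agreed TTL.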


Except for regulatory reasons you will not be able to cache any of those fields you mentioned using standard HTTP infrastructure...

You also probably should not let the browser cache it either (you’ll typically get dinged for it in pentests because your app may be used on a shared computer).


You may not be able to present cacheable data to your end-users, but you can still utilize caching for internal consumers (such as application-layer services consuming data from enterprise-layer services).

Or, if you have a CDN, you could cache closer to the end-user (assuming you have the necessary controls on your CDN to meet regulatory [and common-sense] requirements).


Yep, caching in general totally makes sense where possible, and agreed re: services and CDNs.

But end-to-end cacheability at the HTTP layer, part of the original rationale behind the REST API design style, does not make sense for most APIs anymore in 2018.


> But end-to-end cacheability at the HTTP layer, part of the original rationale behind the REST API design style, does not make sense for most APIs anymore in 2018.

Why not? What were the relevant changes between the 90s and 2018?


* Most APIs are authenticated.

* HTTPS has broken intermediate caches.

* Most responses are personalised or highly dynamic.

* Many responses contain PII or sensitive information and should not be cached.

* Many requests are rate-limited, billed or metered.

Observation: people go out of their way to cache static resources but tend to disable caching entirely on their JSON APIs.

I do have some bias; recovering pentester who rarely worked with APIs that served the same response to all users.


There are still many use-cases for HTTP caches within internal infrastructure—such as between microservices.


This depends on downstream caches respecting your TTLs. And, as Java has shown with their DNS implementation, that's not a reasonable expectation to hold.

You can force some of it by putting everything behind TLS, but even then there will be downstream caches for some of your customers (especially the enterprise customers with their sweet, sweet enterprise money) which will still interfere with your expected caching.


The actual problem with caching is personalisation. This has nothing to do with REST.


> I've found useful for myself is that REST is about operating on the HTTP layer instead of building your own layers on top of it

That's just using HTTP.


I think people are missing the parent's point. HTTP is not just some random aspect of delivering your content. It's an integral architectural aspect of the service.

If you're doing caching, you want to do that throughout the network (via proxy) so you can have tiered control of traffic flow. If you're going to block access to networks, you want to do that at a firewall so you can do some access control independent of your API endpoints. If you're designing your API, you want your HTTP method to reflect the kind of operation happening on the server, so you can categorize traffic by method when you're looking at service metrics.

These aren't just "using HTTP well", they're deep architectural decisions. They deal with HTTP, but you make them not just because you're trying to use some transport protocol the right way, but because it improves your quality of service.


I'm mostly comfortable with designing not-the-worst REST-inspired APIs when it comes to paths, query params and post/put/returned bodies.

Where I do have trouble is specifically with error codes. There never seems to be the right one. Returning any frequency of 5xx error codes could give a load-balancer a reason to remove the instance and replace it. Returning far too many 400 codes makes many errors difficult to distinguish, and often the cause is shared between the client and the server and the request could succeed later, so 400 would be inappropriate.

The main source of difficulty is that the codes describe conditions rather than subsequent actions. It's better to think of 429 as "the client shouldn't reissue the same request without backoff"; this handles both rate-limited requests and a server being overloaded, and in those cases all is well. Status 400 tells the client not to reissue the same query, but it's not clear whether that refers only to the lexical structure of the query/body or also to the server state, in which case 409 may make more sense. I don't have a clear understanding of what the other codes mean to clients or servers.

When should one use 410 Gone:

"The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed."

These codes were created with HTML, not REST, in mind, so we'll have to make do; but don't think they're complete or perfect as-is. New status codes keep getting created, with varying adoption.
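To put the "subsequent actions" framing into wire terms (values illustrative):

  HTTP/1.1 429 Too Many Requests
  Retry-After: 30

  HTTP/1.1 409 Conflict

Retry-After hands the client its next action directly; 409 says "re-read the server state, then reissue"; a bare 400 says "don't reissue this request as-is at all".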


The main problem with REST is Roy Fielding's communication style.

Here's a quote from a 2008 blog post:

> Apparently, I use words with too many syllables when comparing design trade-offs for network-based applications. I use too many general concepts, like hypertext, to describe REST instead of sticking to a concrete example, like HTML. I am supposed to tell them what they need to do, not how to think of the problem space. A few people even complained that my dissertation is too hard to read. Imagine that!

Yeah, Roy. It's called being an effective communicator. Effective communicators include stories and concrete examples to help bring the audience along with them, instead of lengthy discussions of abstract concepts. Readers shouldn't have to perform exegesis on your slideshow and blog to get even a vague understanding of what you're trying to get at.

To be clear, any difficulty with REST in practice is a direct result of an underspecified concept without a clearly defined RFC as it relates to HTTP. Fielding didn't define anything about REST in terms of HTTP. All of REST, as it relates to HTTP, has been an application of what other people think he meant.

Nowhere in Fielding's 2000 dissertation[0], his 2008 talk at ApacheCon[1], or his blog[2] does Fielding have clear examples or thoughts on the kinds of specific details that Web developers often struggle with.

Fielding, in my estimation, seems principally concerned with three things with regard to REST:

1) Every important resource has its own URI.

2) All interactions against that URI should be stateless and cacheable: all application state that is specific to the user should live on the client (e.g., local storage for browsers), and all shared state should live on the server.

3) Interaction with that URI should be the same regardless of client (browser, mobile app, etc.).

Everything else is left as an exercise for the reader.

[0] https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

[1] https://www.slideshare.net/royfielding/a-little-rest-and-rel...

[2] https://roy.gbiv.com/untangled/category/web-architecture


> Readers shouldn't have to perform exegesis on your slideshow and blog to get even a vague understanding of what you're trying to get at.

(Ironically?) I've found this same problem to be one of the best reasons to avoid WSDL/SOAP. Too much pie-in-the-sky waffle, not enough pragmatic real-life flavour.


What's esoteric about WSDL? Arguably it's verbose and a misuse of markup languages, and the WS-* specs might be too generic, but WSDL at its core is a straightforward representation of an RPC or REST protocol.


From my little time in WSDL/SOAP land: unending boilerplate and busywork, and a constant sense of just get out of my way.

I had to study it for quite some time before I had any idea what any of it was actually for. It was clearly designed with the mindset that it is more important to theoretically support anything imaginable, than to be pragmatic and helpful to today's developers. To put that in buzzword terms, it fails to honour the principles of YAGNI and KISS.

It's very much a product of the ivory tower, and there's a reason it's been almost completely abandoned in favour of JSON/REST.


> there's a reason it's been almost completely abandoned in favour of JSON/REST

That's kind of ironic to say in a thread full of endless debate about doing REST the proper way, isn't it? SOA(P) is really about defining a service in terms of its external interactions, in language-neutral descriptions, for big applications with long-term maintainability. It's not about developer convenience.

All REST applications I've encountered try to shoehorn state changes into HTTP PUT requests and would have greatly benefitted from expressing these as explicit state transitions in a WSDL-like RPC model. None has made the slightest attempt to represent state via HATEOAS and advertise interactions via hyperlinks, which is the entire point of REST and loose coupling. In typical server-side languages other than JS/Node.js, JSON isn't a native object serialization format and must be produced/consumed exactly like XML (eg. by using binding annotations or similar). So I'm not sure what has been gained, other than a mess of JSON-over-HTTP services without transaction boundaries for the next generation to clean up.


JSON is significantly superior to XML for one simple reason: edge-labeled graphs are isomorphic with common programming language data structures and node-labeled trees are not.
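A toy illustration of that difference (the field names are made up):

  {"user": {"name": "ada", "tags": ["x", "y"]}}

  <user><name>ada</name><tag>x</tag><tag>y</tag></user>

The JSON maps directly onto a record whose fields are named by its edges; with the XML, a deserializer has to decide whether <name> is a field, a wrapper, or one of a repeated list, which is exactly where binding layers earn their complexity.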

I've not used binding annotations when working with server-side JSON in Ruby or Python, and there's little need to do it in Java unless you want a mechanized description of your API data structures to do other things with, like documentation or tool-driven schema validation.

HATEOAS is nuts for an API layer. It forces an overly chatty API, because you need to navigate a graph to perform actions rather than look up paths in docs; but the paths don't go away, they just move from being path templates in resource docs to being paths of navigation through a series of responses. The essential complexity is constant, and the need for hard-coding endpoint knowledge into the API client hasn't gone away, it has just moved.

HATEOAS makes some sense for user interaction, but even then it works poorly for people who fork their navigation: open up several child pages from a list of items, add each to the basket, then go to the checkout. Do they all go in the same basket, or do you have several different baskets depending on the path of navigation? HATEOAS will guide you to the latter, because hypertext drives the state, rather than some hidden state. HATEOAS works much better in a read-only context, where you're e.g. sorting and filtering through a list. Actions, less so.


Your points are appreciated. I think your graph-theoretical interpretation of the XML-vs-JSON debate, though, is missing the point that SGML/XML describes an information serialization using a grammar formalism (regular tree languages), as opposed to JSON, which is essentially described by co-inductive type theorems. Though JSON is lacking because it can only represent trees rather than more general graphs, and because of JavaScript's very weak type system (eg. wrt. primitive types, as well as its lack of a natural type system for JSON compound types, hence requiring kludges such as JSON Schema).


> It's called being an effective communicator. […] bring the audience along […] Readers shouldn't have to perform exegesis

He's optimising for communicating with his peers: scientists and academic researchers.

If you find this communication hard to digest, then chances are you are not part of the intended audience. It is then mostly pointless to complain or try to change his wording.

You can buy books from other people who deliberately target developers.

https://oreilly.com/catalog/9780596529260
https://oreilly.com/catalog/9780596801694
https://restinpractice.com/
http://www.designinghypermediaapis.com/


That's because they have generations of narrative dialog that is only intended to be used in academia. They use superfluous and oftentimes nonsensical language because they are attempting to create an environment of pseudo-intellect where none exists.


Is this parody?


Scientists and academic researchers don't benefit from examples? :)


> Yeah, Roy. It's called being an effective communicator.

An author uses technical jargon and concepts familiar to his intended audience. Fielding's intended audience for his thesis is other people with PhDs. His writing is perfectly cogent if you have that level of knowledge.

The people who really failed REST are the blog authors who didn't have the necessary background to evaluate the REST constraints and started talking about REST despite not understanding half of the thesis. This is what diluted the term.

> Fielding, in my estimation, seems principally concerned with three things with regard to REST

You missed the central importance of hypermedia.

> Everything else is left as an exercise for the reader.

That's like saying an RPC library leaves designing and building your specific API up to the reader. Of course it does, it has no business imposing any more constraints than are needed to fulfill its purpose.

REST specifies some architectural constraints that entail certain distribution and scaling properties, that's its purpose. Anything beyond that is domain-specific logic.


> An author uses technical jargon and concepts familiar to his intended audience. Fielding's intended audience for his thesis is other people with PhDs.

Perhaps, yet his blog is written in exactly the same style.

Who are all these ComSci researchers who are most comfortable with these concepts and distinctions, anyway, and who are also designing web APIs? At some point, if you wish any real system to use your ideas, you need to communicate it in some way that explains it for a typical software developer. If you write a blog in the style of a thesis, only thesis committee members will ever want to read it, or be able to understand it.

It's like the old story of the plumber who discovered hydrochloric acid clears clogs, and when he reported it, some scientists told him "The efficacy of hydrochloric acid is indisputable, but the corrosive residue is incompatible with metallic permanence", and he thanked them. Repeat several more exchanges, with even more jargon. Finally, they tell him "Don't use hydrochloric acid. It eats hell out of the pipes."


> Who are all these ComSci researchers who are most comfortable with these concepts and distinctions, anyway, and who are also designing web APIs?

More than you realize, I'm sure. There are plenty of industry people who can understand theses and other research publications. You can find them here, on reddit, on stackoverflow, and on lambda-the-ultimate.

> At some point, if you wish any real system to use your ideas, you need to communicate it in some way that explains it for a typical software developer.

That's what books and blogs are supposed to be for, but how many of these people do you think spoke with Fielding before writing about REST to ensure they got it right?

Fielding's emphasis on abstract concepts are because REST isn't tied to HTTP, or HTML, or XML, or JSON. He's trying to explain calculus to you so you can solve many problems, and you keep focusing only on how to calculate velocity given acceleration just because that's the problem in front of you right now.


Until AJAX happened around 2005, GET and POST were the only verbs you could actually use in a browser. Before then, REST was not really a thing. It used to be that the semantics of HTTP verbs had meaning in the context of e.g. caching and other middleware. But now that everybody uses TLS, having caching proxies acting as middlemen and interpreting HTTP messages is not that common any more.

REST was a nice idea when it got (re)invented about ten years after Fielding's thesis, but it left too much open for debate, ambiguity, and nitpicking, and quickly degenerated into different opinionated camps trying to retrofit new ideas onto the existing practices around HTTP, which mainly boiled down to interesting ways to do RPC.

Before HTTP, people were already long doing RPC using e.g. DCOM, CORBA, RMI, etc. Reinventing RPC over X has been the fate of pretty much every way we've come up with to make two computers talk to each other.


> Before then, REST was not really a thing.

You only need GET and POST for REST.

> Reinventing RPC over X has been the fate of pretty much every way we've come up with to make two computers talk to each other.

REST isn't just RPC, it's RPC with specific constraints to enable caching, scaling, adaptability, and distribution properties that unrestricted RPC alone cannot achieve.

That's why REST is still important today, because RPC is too flexible, and RPC without constraints will inevitably run head first into all of the issues that REST solves for you. Unrestricted RPC libraries then let you "solve" these problems in myriad incompatible ways each with their own problems.


> Unrestricted RPC libraries then let you "solve" these problems in myriad incompatible ways each with their own problems.

I mean, this is exactly the state of REST today. It's not specified well enough to admit general purpose clients (like, for example, GraphQL does) because it's just a "pattern" not a specification.

Because every API has its own way of doing paging, subresource expansion, expressing side-effectful operations, etc it means every API has a corresponding custom-built client.

> Caching

Do these custom clients implement http caching? Not usually.

> Scaling

Arguably, REST is responsible for bringing about the rise of stateless API servers. We should be grateful for the good idea, and not feel compelled to adhere to the other aspects of REST if they don't suit our purposes.

> adaptability

Claims of additional adaptability in REST are usually talking about HATEOAS where each resource defines how it can be interacted with by providing links and semantic metadata etc. I will flat out say it: this doesn't work. Short of building some intelligent agent into your API client, hypertext will never be the engine of application state in any meaningful way. You can't just change the links and have the client magically discover the right thing. Fielding's thesis was talking about the web, where humans are the ones deciding which links to follow. It's a useless concept in API design, and it adds tremendous overhead for zero benefit.


You need a client that understands the media type it's retrieving, yes. That's the same for all clients, using HATEOAS or not.


> I mean, this is exactly the state of REST today. It's not specified well enough to admit general purpose clients (like, for example, GraphQL does) because it's just a "pattern" not a specification.

I don't think that's true. It's very well specified, it's just an extensible specification admitting many possible hypermedia formats. There is no one single hypermedia format to rule them all. Fielding's thesis was about the architecture that allowed the web to scale, and content negotiation and extensible content types are part of that.

> Because every API has its own way of doing paging, subresource expansion, expressing side-effectful operations, etc it means every API has a corresponding custom-built client.

This isn't necessary, depending on what you mean by "client". All such operations should be encapsulated as links embedded within a hypermedia format. The client needs only to know the high-level workflow or path to navigate the hypermedia and extract the links it needs.

The hypermedia format itself can change, the links can change, but as long as the path through the hypermedia is preserved, the client works. And there are ways to increase adaptability even further if you don't want to hardcode a path through a set of hypermedia documents, like a flat directory of endpoints (but it's similar to the path, since there must be some shared semantic knowledge about what you're looking for).

> Short of building some intelligent agent into your API client, hypertext will never be the engine of application state in any meaningful way. You can't just change the links and have the client magically discover the right thing.

It sounds like you misunderstand the use of hypermedia for autonomous agents. Throw out HTML and consider a hypermedia format that has only two node types: a named embedded link, and non-link data. An autonomous program can then enumerate all links, or access specific named links it's interested in that may be deeply or shallowly embedded within a web of hypermedia documents.

Most endpoints will probably be shallowly embedded, like with PWAs where the JS simply does a form of RPC with the server. But if you consider more complicated aggregated services, like interacting with someone's bank account where the number of services is large and managed by separate teams (mortgage, stock broker, chequing, etc.), then it starts to make sense to require deeper embeddings to permit better organization of the teams developing distinct programs on the server side.

The point being that REST gives you considerably more server-side flexibility to do this.


> It sounds like you misunderstand the use of hypermedia for autonomous agents.

Every time you type hypermedia I want to punch you.


Exposure therapy is supposedly effective for anger problems, so let me help you out:

hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia, hypermedia!!


> Before HTTP, people were already long doing RPC using e.g. DCOM, CORBA, RMI, etc. Reinventing RPC over X has been the fate of pretty much every way we've come up with to make two computers talk to each other.

The key difference between Representational State Transfer and RPC, which makes all of the other capabilities of REST possible, is:

REST deals with nouns and uses HTTP verbs (the common ones, or you can invent your own) to handle a specific and common grammar of operations against those nouns:

  POST   _hyn3 > /user  # (C)reate
  GET    /user/_hyn3    # (R)ead
  PUT    /user/_hyn3    # (U)pdate
  GET    /users < _hyn3 #     List
  DELETE /user/_hyn3    # (D)elete

RPC was literally "remote procedure call", which means you're calling a remote verb and then specifying your noun as an argument or data:

  POST   _hyn3 > /users/create_user
  GET    /users/get_user/_hyn3
  POST   /users/update_user/_hyn3
  GET    /users/list_users < _hyn3
  POST   /users/delete_user/_hyn3

It's more than just semantics, because it gave a common language for almost all CRUD-type operations. (Thus, even though REST was never about CRUD, as the article under discussion states, CRUD and REST do have a lot in common: CRUD represents an extremely common data-access pattern, and the common HTTP verbs, if not REST itself, map to it very cleanly and nicely.)

In other words: REST deals with HTTP and the facilities that it provides. It doesn't deal directly with the verbs behind HTTP, such as GET/PUT/DELETE, etc. Instead it provides an architectural paradigm shift, in which the URL itself is a discrete representation of the data in question, and the verbs -- RPCs -- are just standardized ways of dealing with that data.

This seemingly simple change, obvious only in hindsight, set the stage for a massive change in how API's are produced and consumed.


The comments here show that there's still much confusion about REST being an architectural STYLE, not an actual architecture.

If you can present your domain as addressable representations of resources and if you can lay out your business logic as changes to these resources, then you could use REST via HTTP to provide access to your service.

Furthermore, if it is important to allow any client to connect to your service and you don't have any control over the clients used, then adopting HATEOAS provides a backward-compatible, future-proof way for client software to work with your service.

If, OTOH, you have full control over the client side, you can trade these abilities off for a simpler model (like GraphQL, if you must, or simple RPC over HTTP).

Know what you design for.

Re caching: caching in HTTP is very flexible. ETags basically give you fine-grained control over how to cache and allow for significant bandwidth/performance improvements. However, in many cases your app will also benefit from an additional cache layer in your persistence layer. Don't confuse the two.
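
A sketch of what ETag revalidation looks like from a client (the If-None-Match header is standard HTTP; the in-memory cache map is ad hoc):

    // TypeScript: revalidate with If-None-Match; a 304 means the cached
    // body is still valid and no body is transferred over the wire.
    const etagCache = new Map<string, { etag: string; body: string }>();

    async function getWithRevalidation(url: string): Promise<string> {
      const hit = etagCache.get(url);
      const res = await fetch(url, {
        headers: hit ? { "If-None-Match": hit.etag } : {},
      });
      if (res.status === 304 && hit) return hit.body; // unchanged
      const body = await res.text();
      const etag = res.headers.get("ETag");
      if (etag) etagCache.set(url, { etag, body });
      return body;
    }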


It seems to me that REST "was never meant to solve web developers' problems" in the first place, and that the dissertation wasn't addressed to them. I'm glad we are getting out of REST. Devs need protocols and specifications, not vague architectural ideas that lead to "you're doing it wrong" blog posts. There is no "you're doing it wrong" with GraphQL, or SOAP, or JSON-RPC: you either follow the spec or you don't.


> Devs need protocols and specifications, not vague architectural ideas that lead to "you're doing it wrong" blog posts.

This is precisely it. If an integration mechanism can't be implemented as a clear and uncomplicated library that strongly encourages correct use by making it the path of least resistance, it probably isn't any good.

(And note that I said "library" rather than "framework". This is important, as libraries are more like a thin layer of glue while a framework is more like a metastasising cancer.)

Most of the time, REST, strictly speaking, demands too much attention even for relatively simple stuff, and therefore is not a good integration technology. However, some adaptations of REST are.

The one thing that is a bit sad is that there is some nomenclature confusion. These days a REST API doesn't refer to Fielding's vision, but a more practical adaptation of the mechanical bits of it.


OpenAPI (née Swagger) helps a lot with this. While I'm sure that many purists would argue that you're not really doing REST if you have an OpenAPI contract, it certainly helps a lot with making APIs more usable to consumers.


Both well-defined protocols and vaguer design/architectural principles have their place. But is it a great idea to take a bunch of such principles and bundle them up under a (confusing) name? Perhaps not.

It seems to me that the initial popularity of "REST" was largely because it wasn't SOAP (and didn't have to be XML), but it was still a name you could give what you were doing that made it sound like you were following some sort of standard.


I had zero issues with SOAP. I had zero issues with REST, although I couldn't quite understand the proposed benefit. When people started moving to JSON, I thought, okay, this is literally taking everything that already works and just doing it in an even more complicated way.

It's not hard, but I see very few differences between them. SOAP is easier to read/edit, JSON has less overhead/data, and REST...well, nobody could decide what REST was, so they just made it up or ignored it.


> I had zero issues with SOAP.

Just curious, what programming language(s) did you use to provide and/or consume SOAP APIs?


I'm not the OP but I've used Python with the suds library to consume SOAP APIs. I haven't had any problems with it other than companies deprecating their SOAP APIs just to reimplement exactly the same thing with JSON.


Ah, OK. I don't think suds existed when I first tried to access a SOAP API from Python in 2004. If it did exist, it was certainly obscure.

Perhaps my own experience trying to use SOAP can shed some light on why REST prevailed. In 2004, I tried to use the PayPal mass payment API from a Python application (in the now-defunct Quixote web framework). As I recall, I found one Python SOAP client library, but for some reason, I couldn't use it to access the PayPal API. It was all just so complicated. In desperation, I ended up writing a small service (micro-service?) to access the PayPal API using the official PayPal Java SDK. That service used Jython, because I just couldn't stand Java's verbosity. The main application communicated with the service using XML-RPC.

Compared to SOAP, when I discovered REST, it seemed refreshingly simple, built for the Web rather than awkwardly abusing HTTP as a transport for an over-complicated RPC protocol.


Quite a few over the years. .NET, Java mostly in the early days. Sometimes I would just write a parser in whatever language I needed.


I just use this https://labs.omniti.com/labs/jsend and typed it up in TS.

It makes for nice code.

    Promise<JSendApiResponse>

With the benefit of HTTP-level stuff not being mapped onto application-level stuff.

I like KISS with this kind of thing and if you need a more complex solution it handles that as a starting point.
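
For reference, one possible TS typing of the JSend envelope (a sketch; the three statuses come straight from the JSend spec):

    // JSend: "success" and "fail" carry data; "error" carries a message.
    type JSendApiResponse<T> =
      | { status: "success"; data: T }
      | { status: "fail"; data: unknown }
      | { status: "error"; message: string; code?: number; data?: unknown };

    // Hypothetical endpoint, just to show the shape in use:
    async function getUser(id: string): Promise<JSendApiResponse<{ name: string }>> {
      const res = await fetch(`/api/users/${id}`);
      return res.json();
    }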


You can do this with Axios too:

  import { AxiosResponse } from "axios";

  interface IResponse {
    foo: string
  }

  async function doSomething() {
    const response: AxiosResponse<IResponse> = await Api.call()
    ...
    // or you can use AxiosPromise<IResponse> without the await
  }


> There is no "you're doing it wrong" with GraphQL, or SOAP, or JSON-RPC,

Where there's a will, there's a way to 'do it wrong'. It's amazing the kinds of messes people can (or are forced to) dream up in this world... even if they're ostensibly playing by rules that aim to make messes 'impossible'.


After using GraphQL for over a year now, I rarely miss the good old REST days. Admittedly, declaring the schemas/models at the beginning is more cumbersome than with REST. But once that is in place it is just so easy to work with: no more adding extra fields to responses to avoid doing multiple unnecessary queries. Just lovely.

When mutating state with GraphQL, the suggested approach seems to be to have "action-based" endpoints, e.g. "publishArticle", which resonates well with me.
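
A sketch of what such a mutation might look like over the wire (the schema and endpoint here are hypothetical):

    // Action-based mutation: "publishArticle" names the intent directly.
    const mutation = `
      mutation Publish($id: ID!) {
        publishArticle(id: $id) { id status }
      }`;

    async function publishArticle(id: string) {
      const res = await fetch("/graphql", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query: mutation, variables: { id } }),
      });
      return res.json();
    }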


My biggest issue with GraphQL is the fact that what essentially used to be a database operation has now become a (coordinated) batch API query. So the operation has been split into microservices, then re-joined using a GraphQL interpretation SPoF, all so that client business logic can be encapsulated as a query string, instead of securing that effort as a specialised API endpoint.

Moving flexibility to the client comes with all the inherent risk of that flexibility being available to (in a browser sense) a risky client.

RPC over REST is a solved problem IMO, arguments about REST RPC semantics become ideological after a while.


> RPC over REST is a solved problem

I'm interested in understanding more about this statement.

Do you mean that all REST RPC libraries are sufficiently developed such that you don't really need to worry about RPC being tricky, or that we all just as a community understand the best strategies and common pitfalls. Or do you mean a specific technology?

Just wondering, because whenever I hear the term RPC as an experienced developer I just shudder (-: Thanks!


I meant that there are well discussed strategies for handling calls that are RPC-oriented rather than CRUD, not that there are REST libraries that have solved the problem.

Ultimately we find ourselves building 80-90% CRUD operations with edge cases that don’t entirely fit with the straightforward RESTful toolset, and in those situations making a call on how to handle it in code is a topic that has by now been more or less discussed to death and becomes a stylistic choice.

REST isn’t a protocol, there is definitely interpretation at work.

I was using the term “RPC” to cover topics that are outside the scope of CRUD, “do a thing” as opposed to “manipulate a thing”.


Thanks!

My own personal take on RPC being solved is that we're all pretty much agreed it's not a good idea unless you absolutely have to!

(I mean RPC in its more narrow definition as remote, synchronous function calls - e.g. CORBA, SOAP, XML-RPC or the original Unix RPC protocol - for me this is what makes the resource-oriented approach of REST much more appealing)


> My own personal take on RPC being solved is that we're all pretty much agreed it's not a good idea unless you absolutely have to!

I'm very much of the opposite opinion — Ideally services are indistinguishable from any other library, and the fact that they're running remotely instead of locally only means there's a couple more (networking-related) error cases to handle. RPC setups like Thrift or gRPC support this way of thinking relatively well, whereas REST's resource/verb paradigm really forces your APIs to have a certain shape.


> Ideally services are indistinguishable from any other library

I don't mean to be dismissive [0] - but that mindset seems a bit prehistoric to me. We tried CORBA, everybody hated it, and it died!

Indeed, the bible on the topic [1] devotes substantial amounts of time to how to structure your interfaces just so that you avoid issues arising from the discrete system boundaries.

There is a world of difference between issuing an instruction that repositions a program pointer from one position in memory to another, and transferring execution from one compute node to another while artificially blocking execution to simulate a single thread of execution.

(Even, on a single machine with multiple cores this can be a problem - though to a much more limited degree)

This is why we prefer loosely coupled, message-passing approaches, and resource-oriented ones such as REST. This is why modern languages have "futures" and "promises" (the implication being that we sometimes break promises) and why functional languages are so much in vogue (specifying "what" and leaving a substrate more sympathetic to the underlying architecture to decide "how").

[0] https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...

[1] https://www.pearson.com/us/higher-education/program/Henning-...


> RPC over REST is a solved problem IMO, arguments about REST RPC semantics become ideological after a while.

"RPC over REST" is oxymoronic.


Where do you spend your time? On the backend, front-end or both?

On the front-end, what are your thoughts around managing data in the in-memory store (Apollo Client)?


I worked mostly on setting up our backend, and more recently dived into the frontend. We use Apollo in our frontend (with React), and I quite like it. We usually wrap stuff in high order component constructs, and for our needs the caching things work great. The ability to modify the cache manually during mutations are really handy to ensure that changes reflect locally (without needing unnecessary refetches).


Despite the architectural principles, it's named REST--REpresentational State Transfer. It's really hard to twist and contrive a situation in which "REpresentational State" is not CRUD. It can be done, but it's awkward. As soon as you're doing non-CRUD operations, it's really hard to model them as representations of state, and it's sometimes very unpleasant to use APIs where the designers tried too hard to make the peg fit.


I disagree with the article and the examples provided.

The author has left out a few key components of REST in their initial distillation. Critically:

> REST components perform actions on a resource by using a representation to capture the current or intended state of that resource and transferring that representation between components.

So given the example from the article:

    POST /articles/{articleId}/approve
This URL does not point to a resource, and the HTTP verb does not describe what action is being taken upon a resource. It's also not using the representation of a resource to capture intent.

It's also worth noting that a GET for the article would return the "status" field and presumably other PUT requests to update the article would either act directly on the article or have other vanity action URLs, making the API more verbose than if everything were done properly.

A RESTful design would be in essence CRUD. It would have a url (/articles/{articleId}) identifying a resource, it would have a representation of a resource that includes a "status" field, and changing the status of the article would involve a PUT request to the URL of the article with the desired change of state.
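
For example (a sketch; the field name is illustrative, and strictly speaking a PUT would carry the full updated representation rather than just the changed field):

    PUT /articles/{articleId}
    Content-Type: application/json

    { "status": "approved" }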


It's a "request to approve an article" resource, even if it hasn't explicitly been modelled that way.


By that logic, SOAP is RESTful.


REST was never that good to begin with.

(Not even as originally intended, which is very different from the RESTful way 99% use it -- originally REST was all about self-reporting and discoverability, which is not used, and irrelevant to most use cases).

Having JSON as the response, and a scheme to address entities, was good. Even using HTTP action semantics was OK (though debatable). But that's far from what REST is about.

Something like JSON-RPC would be a much better fit for web API needs.


I'd argue that GraphQL is starting to fulfill the conceptual promise of REST: A universal, discoverable, self-describing API protocol. Every REST implementation ends up being ad hoc and not interoperable. With GraphQL you can point a documentation tool at a live server and get reference documentation for that API, for example. You get a schema, the ability to join collections of data together, parameterized queries, and more.
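
That discoverability comes from introspection, which is part of the GraphQL spec itself. A sketch of what documentation tools do under the hood:

    // Ask any GraphQL server to describe itself (standard introspection query).
    const introspection = `{ __schema { queryType { name } types { name kind } } }`;

    async function describe(endpoint: string) {
      const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query: introspection }),
      });
      return res.json(); // reference docs are generated from this schema data
    }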


Are we just turning our application servers into database servers?


Yes. When we write carefully crafted services that manipulate data based on a set of permissions and rules that make up what Twitter (for example) is, we're basically doing similar things to what Postgres does when it writes data: it looks at its own set of permissions, etc.

So yes, that's the exact analogy. We're writing databases on top of databases. Only ours are NoSQL.


GraphQL decouples the API and the database entirely. You can implement GraphQL as direct DB access, but that's certainly not the intent. Quite the opposite, really: it was designed in no small part to provide the ability to aggregate and re-organise data from multiple sources in a single coherent API.


You are missing the point.

GraphQL is a relational query language, just like SQL, only less powerful.

The fact is that people always wanted just a database anyway, but with some graphical interface on the front. Once we moved the interface to the client in an SPA, it seems unavoidable that the backend will turn database-like.


Could we not just move the SQL interpreter higher up the stack? Implement tables, views, and functions as application code and use SQL from the front end?


> originally REST was all about self-reporting and discoverability

I don't think this is accurate. Discoverability was the bonus piece; the most fundamental parts were statelessness and caching.

A lot of grievances in early web development came from state breaking when people browsed back, reloaded pages, did other things between requests, or got hit by pretty aggressive caching.

Nothing intrinsically REST-blocking in the original zero-JavaScript, form-based apps; they were just that messy and expected people to follow a linear path in a single browser window.

People were solving these problems for the first time (for the web), and they actually needed to be told to care for state.


That people misused REST is not what made REST "never that good" (in your opinion). In fact, it was this misuse that frustrated Dr. Fielding more than anything else.


> That people misused REST is not what made REST "never that good" (in your opinion).

I used to be partial to that interpretation, but after a few years of looking at it, at what people do and at what's actually valuable, I've come to the conclusion that coldtea is correct: REST as originally intended was not that good for programmatic contexts.

It certainly didn't help that it got quickly buzzwordified as "actually use HTTP properly instead of just shoving your garbage through POST all the time" (the original sin of SOAP or XML-RPC), but even then what makes "REST" work for browsers (which is what the concept was extracted from) is that there is an intelligent agent doing the discovery and driving the interaction.

That's not useful for a programmatic API. You can add a discovery layer to a programmatic API (as e.g. graphql schemas & explorer tools) but requiring programmatic clients to go through such a discovery layer just makes interacting with the API painful for no value.

Traversing a few pages when clicking around in your browser makes sense, doing so from a programmatic client when you could literally just store the correct endpoint and perform your call directly is nonsense: it's inefficient, it's annoying and it's pointlessly verbose. Yet that is essentially what proper rest as described and clarified[0] by Fielding require.

Even the "multiple interpretations" (multiple content types) option is worthless when browsers don't provide the flexibility required to do content negotiation[1] or leverage HTTP verbs beyond GET and POST.

[0] https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...

[1] even in places where they allow actual mimetypes it's as likely as not they'll ignore it e.g. object/@type is a mimetype but no browser I know cares about it, most of them send Accept: */* and firefox sends its pagewise/default Accept.


> That's not useful for a programmatic API. You can add a discovery layer to a programmatic API

A discovery layer isn't necessary if you're using hypermedia as intended. "Discovery" just means a loosely coupled format that associates the "logical" addresses, which your program shouldn't know a priori, with readable names, which your program uses to find the logical addresses it actually needs.

It's like the difference between programming in assembly language with direct memory addresses (URLs), vs using a programming language that abstracts memory addresses with named variables (hypermedia format that associates readable names to URLs), thus decoupling clients from needing to understand potentially opaque URL formats.
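
In code terms, something like this (the format and names are hypothetical):

    // TypeScript: the entry document maps readable names to URLs,
    // so the client never hardcodes an endpoint.
    async function loadArticles() {
      const entry = await (await fetch("https://api.example.com/")).json();
      // e.g. { "articles": "/v7/articles", "profile": "/v7/me" }
      return fetch(entry["articles"]); // survives server-side URL renames
    }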

The real problem with REST is that REST frameworks don't provide any guidance or support for any hypermedia format that isn't HTML, and HTML isn't well suited to programmatic interaction. None of the commonly used data interchange formats support distinguishing embedded links from other data, so they aren't hypermedia formats. No wonder doing REST right is so difficult.

> doing so from a programmatic client when you could literally just store the correct endpoint and perform your call directly is nonsense: it's inefficient, it's annoying and it's pointlessly verbose.

Except it's not pointless, it allows painless server-side upgrades. It might cost you some efficiency, but it provides significant flexibility.


> Traversing a few pages when clicking around in your browser makes sense, doing so from a programmatic client when you could literally just store the correct endpoint and perform your call directly is nonsense:

That requires implicit knowledge of the 'correct' endpoint. If that endpoint changes, or anything in between, you'll have to start versioning your API, which basically tells all existing clients: sorry, you've got to update.

Allowing clients to discover resources is a future-proof way to keep your client libraries compatible and evolve your API without versioning. It is more work for the client and it is not always suitable. But if you intend to design something future-proof (and we are talking about 10+ years), then REST has already proven to be very successful (your latest browser can still browse a web page from 15 years ago...)


> That requires implicit knowledge of the 'correct' endpoint. If that endpoint changes, or anything in between, you'll have to start versioning your API, which basically tells all existing clients: sorry, you've got to update.

In exactly the same way that the "RESTful" version requires implicit knowledge of the content types: that knowledge is not implicit at all, it is specifically and explicitly embedded in the system.

The word you're looking for is static.

> Allowing clients to discover resources is a future-proof way to keep your client libraries compatible and evolve your API without versioning.

That is completely untrue.

> But if you intend to design something future-proof (and we are talking about 10+ years), then REST has already proven to be very successful (your latest browser can still browse a web page from 15 years ago…)

Because it has a somewhat intelligent and flexible human driving it, able to adapt to changes in the visited targets. Not so with a programmatic client.


If users misuse a UI then the UI wasn’t designed well. If developers misuse REST then it’s their fault for not understanding REST.


Exactly. And also: a UI could be badly designed even if its users don't misuse it.

Just because users might be extra careful in their use of a UI, doesn't mean the UI is not badly designed itself.

A big red button that says "Click me for free lemonade" but instead burns the whole computer is bad design -- even if the users have read the manual, so they know what it does and don't press it.


>That people misused REST is not what made REST "never that good" (in your opinion).

You're absolutely correct.

It's not the misuse that made REST "never that good".

It was already not that good (again, in my opinion).


Agree. The whole point of rules is that you're allowed to break them from time to time.

During development with Rails I struggled a lot, because it was forcing CRUD ideology in places where a custom route would work (and adding one was so painful).

Standardization is fine, but usually it does not solve a development issue; it solves a management issue, which is common for big orgs, not small ones. Especially when it gets hyped or forced by top management, as has happened numerous times, with the latest examples being React, GraphQL and Microservices.


Hm, I never found adding custom routes to Rails painful - could you elaborate? Anyway, for smaller use cases I found Sinatra + ActiveRecord a great match.


Sinatra is awesome! Had a project recently where Active Record and Postgres with Sinatra made sense and it was completely painless.

Rails _might_ have helped, but I have come to enjoy Sinatra's flexibility. Been debating about making something in between, but working on other things.


I'll be frank, the latest Rails version I used was Rails 3, and it was like 6 years ago, so I can't really remember details, except that I had to spread logic across like 3 files to make it work. Maybe things changed.


No, you didn't have to spread out the logic like that in Rails3. You could do something like

    class AsController < ApplicationController
      def index
      end
    end

    # Namespaced so the "as/bs" controller path resolves to As::BsController:
    module As
      class BsController < ApplicationController
        def create
        end
      end
    end
with routes like

     resources :as do
       resources :bs, controller: "as/bs", only: :create
     end


Even that is resourceful routing, just with nested resources. Doing a flat out custom route is even easier:

    class AsController < ApplicationController
      def custom
      end
    end
routes:

    resources :as do
      member { get :custom }
    end


Yes, of course.

The whole point of my example was that it is possible to do resourceful routing in a single file without having to resort to using custom routes. I try to avoid custom routes whenever possible - in my experience they lead to pain in the long run, more often than not.


True, in general any framework that claims to help build REST APIs can put you in a situation where it forces assumptions on you.

Then the interpretation becomes the pattern, and it becomes a self-perpetuating fallacy.


> During development with Rails I struggled a lot, because it was forcing CRUD ideology ...

This reminds me a bit of my first experiences with Common Lisp to write anything resembling serious code. (A Java compiler for my compilers class in school.)

My education to that point had almost entirely been driven from a purely list based and functional view of the language. Instructors would default to lists at the expense of other data structures and strongly encourage 'non-destructive' operations at the expense of mutation. (Even the term 'destructive' carries with it an obvious negative and somewhat dangerous connotation.)

Flash forward a couple semesters, and I'm trying to take this approach beyond writing simple unifiers and minimax search and apply it to a compiler with multiple layers of code that's being built up in phases over months. A month or so in, my resolve cracked, I started exploring some of the other options provided by the language (hash tables, vectors, mutation, etc.) and made a bunch more progress (and got something working by the end of the class).

My point in saying this isn't really to advocate for mutation, etc., but rather to agree with your point that sometimes it's worth breaking out of self-imposed constraints so that you can face the external constraints that are probably more important. (ie: my business client probably doesn't give a damn if my RESTful interface is RMM level 2 or three as long as it lets her effectively run her business.)

(One thing I'll add about the Lisp experience above is that it was more than half a lifetime ago... I'd probably be more nuanced with my use of mutation, etc. now than I was then. But then, I was much less experienced and a refugee from Pascal, etc. One of my constraints was my instructors' encouragements to avoid mutation... but another of my constraints was my own lack of techniques for working that way.)


Standardization is in place for interoperability issues, not management.

Rules are NOT made to be broken, as you claim, and I see more people getting themselves into trouble for thinking that.

This is computer science, not a game.


No, it's absolutely not "computer science".

In fact the rules you talk of are totally ad hoc and/or cargo cult. At best, they're self-reported empirical (pre-scientific) findings.

There's no systematic research showing that they're better than some alternative set of rules, etc. People just adopt this or that framework or methodology (like REST) because some famous programmer wrote it or suggested it, some company promoted it, it picked up steam on GitHub and blogs, and so on.

That's not only not "computer science", it's not even "software engineering".


My main gripe with REST is that it's totally not future-proof. The notion of RPC has been around forever; REST has basically completely replaced it. HTTP status codes and REST URLs are not future-proof. We have already seen superior connection types (WebSockets) come (and then basically go) that don't support the notion of URLs (well, they do for a second, but after the original connection is established the URL is meaningless for the traffic flow). With HTTP/2 we're seeing the return of lightweight connections (basically similar to WebSockets but with more graceful reconnection options). But HTTP/2 feels like a patch to me. I don't really want to bind the future of apps to a bunch of URLs. I'd rather tie my app to a client that has a rich semantic set of objects and methods that pull down data, can update data, etc.

When we use REST as the layer that conveys the API itself, we really lock ourselves down to a very specific set of URLs and HTTP status codes. I'd rather have a rich error object thrown, not an HTTP status code that I have to examine.

I've had a pretty strong disdain for REST for the last few years and have been actively looking for replacements. The closest I can come up with is gRPC. I'm dying waiting for Google to finish gRPC-Web, however. If we continue to tie ourselves to a bunch of server resources by URL, we're going to take forever to get off this archaic system that has no long-term future.


Do you think the web could have been built with RPC instead, and be just as successful?


Hmmm, but is REST so often conflated with 'just' CRUD, because that is all people really want from REST, or is it that this perception is so pervasive that people don't know what the potential is. Chicken or Egg? I'm saying Egg, but I'm not saying which that relates to :)


CRUD is easy to understand and close to the core of tasks that programmers at large need to do.

Thinking in terms of REST requires that you concern yourself with the full infrastructure. Many programmers I have met simply do not care. Super nice properties such as cacheability and discoverability are outside their domain. Most simply just want RPC.

Try convincing a programmer of the benefits of HATEOAS, though. It will be dismissed as too much hassle with no real benefit.

But if you look at it from a data architectural perspective everything just "clicks". Few projects are however approached from that angle.


HATEOAS can be a weighty concept to get across, and it never clicked for me until I read a book on it. I still think there are aspects of it that could be learned easily enough in isolation, though. Understanding media types, for example, allows you to develop some really nice schemes for API versioning. But few people are even willing to learn that, and they just butcher their URLs instead.
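
For example, one such scheme puts the version in a vendor media type rather than in the URL (the names here are hypothetical):

    GET /articles/42
    Accept: application/vnd.example.article+json; version=2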


The draft BCP56 revision, on the use of HTTP as a substrate, is a nice primer on REST API design

https://tools.ietf.org/html/draft-ietf-httpbis-bcp56bis


The way I think of it is that GET, PUT, PATCH and DELETE all deal with data, whereas POST is a call to action.

It is a little unfortunate that we don't have a CREATE verb (distinct from the update-ness of PUT) and we need to use POST for this purpose.


I've had to support HTTP clients that could only do GET and POST. So I tend to do everything with those two verbs, even though it's not academically correct.


That's interesting. Are you saying that you view POSTs as verbs/processes which only result in side effects on the underlying system but do not change the state of the data? I'm trying to understand the domain of actions that would fall under your POST categorization. Batch jobs, emailing reports, etc.?


That's basically right. I view GET/PUT/PATCH/DELETE/CREATE as simple wrappers around SQL commands, with validation and authorization checks.

POST is a call to do something with that data, which might include changing the data. Off the top of my head, logging in, actions with external systems, sending emails.
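
Spelled out, the mapping being described is roughly (illustrative only; POST deliberately has no single SQL analogue):

    GET    -> SELECT
    PUT    -> UPDATE (replace the whole row)
    PATCH  -> UPDATE (partial)
    DELETE -> DELETE
    POST   -> everything else: log in, send email, call an external system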


An idea that I've never heard expressed explicitly is how REST is basically distributed OOP. Both REST and OOP lead people to waste their time on unimportant questions.


REST is kind of a worthless term at this point because really nobody agrees on what it means.

I'm sure somebody will reply saying "no it clearly means this!" or "it obviously means using HTTP how it was supposed to be used" or "those other people just don't understand REST", but the fact is we still have articles - in 2018! - that are trying to explain exactly what "REST" is.


If people don't agree on what it means, it's because they aren't spending time reading the actual dissertation. Instead they are kind of glancing over it and depending on others' interpretations to create their own. As someone who writes protocol specs myself, I can say Roy takes great care in being as clear as possible: confusion, inconsistency and misinterpretation are not Worthy Things in protocol implementations. To truly understand REST, you MUST understand HTTP and the rationale behind it.


Perhaps he should have written a specification for REST.


Words are defined by their usage, not by what one person thinks, even if they came up with the word.


just because people don't understand doesn't mean everyone is wrong/right https://i.imgur.com/MCsxsAY.jpg


It means using the term "REST" for communication is useless - exactly like in that cartoon you linked.


I don't think anyone ever says RESTful services must be CRUD-based; what they say is that non-CRUD operations are hard to model in a RESTful service.


REST as seen in HTTP has always been about resources (identified by URLs), representations thereof that take the form of a particular mediatype (selected explicitly or by content negotiation), and hyperlinks (with targets and rels).

When resources are obvious nouns already present in the backend's data model, this maps well to CRUD. The problem comes when trying to describe complex mutations, because to most people, this feels wrong, instilled by years of Dos and Don'ts of half-baked lessons about Object Oriented design.

In all this time between when Roy's thesis was rediscovered to promote an architectural style and when the trendiness of REST has finally run its course, there were far too many posts complaining about deployments missing hyperlinks (when usually, the API was externally documented, URLs templated, and its clients hardcoded, making many usecases of embedded hyperlinks moot), and not enough to reassure flummoxed designers that naming resources after verbal nouns or deverbal nouns is perfectly fine. Oops.


The useful/valuable part of what came to be called "REST" (very little of which is found in the dissertation; it's an accident of terminology which has led a lot of people astray) is all about CRUD, because CRUD is the only thing that makes sense to do with a generic "resource".


POST is used for way, way, way more than any CRUD operation.


Indeed it is, but at that point it's not "REST".


Why wouldn't it be?


The HTTP verb isn't reflecting what's happening, and likely some important considerations aren't being represented as entities.


The description of the POST method is very generic; one of its functions detailed in the HTTP spec is:

      - Providing a block of data, such as the result of submitting a
        form, to a data-handling process;
Which fits with much more than mere CRUD. And it's specifically written that it doesn't have to result in the creation of a new URI-addressable resource.

(Also, REST isn't HTTP, you can comply with the former even if you're violating the latter, and vice-versa)


> Also, REST isn't HTTP, you can comply with the former even if you're violating the latter, and vice-versa

Exactly. Complex processing via POSTs instead of distinct URI-addressable resources is HTTP-compliant but not REST-compliant.


I just don't see how it violates any of the REST constraints.


I think of REST as imposing a certain grammar that HTTP alone doesn't necessarily require. Remember, REST is resource-driven.

- HTTP verbs are like normal verbs. They represent an action.

- URLs represent the location of a resource or collection of resources.

- Resources are like nouns. They represent what is being acted upon.

As soon as you start assigning different actions based on different URLs, the HTTP verb no longer actually represents an action and the URL no longer points to a resource.


If you make an analogy, and then the analogy breaks, I'm not sure that shows the flaw is in the original concept.

A RESTful architecture needs a Uniform Interface, yes, but it doesn't have to be an "action" or anything specific. POST has a definition, and everyone handling the requests can rely on it. That fulfills the constraint.


It's not really an analogy, though. If you have a POST request triggering some random sequence of actions, what's the resource? What representation of state is being transferred?

It's not REST, it's just HTTP.


The author proposes:

    POST /articles/{articleId}/submit
    POST /articles/{articleId}/approve
    POST /articles/{articleId}/decline
    POST /articles/{articleId}/publish
I find this troublesome because the URL path component then includes a verb, an action, so there are two verbs in the same sentence!

This can be adjusted to:

    POST /articles/{articleId}/submitted
    POST /articles/{articleId}/approved
    POST /articles/{articleId}/declined
    POST /articles/{articleId}/published
Now the URLs really represent resources on which HTTP verbs operate, and you can readily guess what GET (if needed/applicable) on each resource would give you.
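
For instance, one plausible guess (purely hypothetical):

    GET /articles/{articleId}/approved
    -> 200 with the approval record (who approved, when), or 404 if not yet approved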

There is some cognitive dissonance introduced by the inconsistency of abstraction levels between the former "RPC style" (where HTTP verbs are mere actuators† and the intent is in the URL and/or in the payload), and the latter "REST style" (where verbs carry intent acted upon resources).

EDIT: I do not advocate for one over the other, merely that an API should be consistent overall and not mix both.

† They are used as pure HTTP feature control (like caching) and do not carry application-level semantics.


I'd like to implement such an API, but I feel that oftentimes it's not that we don't want to design an API that way, but that we fail to successfully implement one.

I'd like to hear about guidelines. For that transaction workflow example, how would you implement it? What would be your data model?


I don't agree with the article (to me REST started about CRUD, and obviously got overloaded over time), but that's not what I need to comment on.

That font in the titles could have been gorgeous, but that upper case "D" is offendingly bad.


Ouch! Take your point, but don't you think the feels you get from the lower case "about" in the title makes up for any failing in the "D"?


I've been a long time fan of what this article is saying - but I didn't know it was a thing. Now if I ever run into CRUDdy teams, I have something to share with them :)


The one thing I have to say about this article is that it's posted on a TERRIBLY designed page... Half of the screen is black, the content is aligned to the right, and there's no "reading view".


Harsh but fair!? We're listening, feedback has been passed to the team who look after https://tyk.io/.


The same old story with anything called "architecture patterns", you can look at them from different angles and each time you will have a new interpretation.


Half the page is a calendar widget?! This is the worst layout I've seen in a long time.


Harsh criticism Johnny! But we'll take it on board, feedback has been passed to the https://tyk.io/ team, there is certainly an opportunity to adjust proportions here. As a matter of interest, which blog/article page layout do you really rate?


I think I was that emphatic because I couldn't believe anyone would want to exhibit such an obviously flawed design in public until it was fixed. I'm still confused how this would make the light of day, but I guess esthetics and usability are something learned perhaps over time?


This is an oft revisited topic, and it might be worthwhile to cite a few paragraphs from Dr Fielding in order to support and expand upon the submitted article.

"Search my dissertation and you won’t find any mention of CRUD or POST." [0]

> For example, some APIs may need to support actions such as submit, approve, and decline. With our somewhat limited action verbs in HTTP, how can we add this to our API?

By representing the difference in states. Or otherwise. The communication medium is not the API. ("A REST API should not be dependent on any single communication protocol, though its successful mapping to a given protocol may be dependent on the availability of metadata, choice of methods, etc. " [1])

> POST /articles/{articleId}/publish , POST /articles/{articleId}/submit , ... etc.

A few years ago I was zealously advocating against representing actions in URIs, and using a generic /actions [sub]resource, but this actually makes it simpler to represent the available actions to the client. And sure, this can be thought of as REST via the code-on-demand part of the architecture. The server points to a script (it doesn't have to be JS, but the media type must be defined in the REST API), that can understand the available actions from the state representation, and construct the necessary representations for the desired actions. (So in a HTTP API, the JS interprets the JSON and knows how to put together POST requests to modify the resources, without the actions having been reified and URI-fied in the JSON.)

> The workflow our API supports is more explicit, [...]

Sure, that's pragmatic, but after much gnashing of teeth and scratching of heads, REST is about presenting options to the client by offering various media types (content-types) and letting the client request the best representation, and not really about a fixed ACL nor about a fixed flow.

From [1] again: "A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]"

[0] https://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post

[1] https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...


> "A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types).

Arguably this is the problem: what Fielding is describing here has a well-known name: Object Oriented Analysis and Design. It's not at all clear why a new formalization was needed when we already have Objects. Resources have URIs, Objects have memory addresses. Resources have representations that are defined by content media types, Objects have behavior that is defined by -- types.

> POST /articles/{articleId}/publish , POST /articles/{articleId}/submit , ... etc.

Good ol' OOAD can avoid nonsense like this. It's immediately clear that the Resources identified by these URIs aren't real objects. If you want to talk Published Articles vs Submitted Articles you would introduce new types and new repositories.

The only way this makes any sense is:

    POST /articles/published/
    POST /articles/submitted/

Only now does 'GET /articles/published/{articleID}' make sense and we might think about what that content media type looks like vs submitted articles.


It looks like this is a case of the stereotypical academization. Fielding looked at the Web, and abstracted it, and calls it REST Architecture.

And of course, the problem is that nobody sits down to just do a quick World Wide Web API for an app, or an internal corporate business-as-usual thingie. In fact, since the original hypertext project at CERN, no one ever did it. We still use the same Web, incrementally transformed into what it is today, but still the same HTML, the same Content-Types, and HTML-1.0-like markup. All eaten and excreted by JavaScript, of course, but that's a prime example of something incremental, and not exactly thought out.

Moreover, there's not much to say about media types driving interaction, because 99.9% of the Web is GET GET GET. And OPTIONS (due to CORS) is driven by out-of-band conventions (defined in the Fetch and the amended XHR APIs, but not much to do with media types - though of course CORS is more about fixing the leaky encapsulation of the communications interface/layer, not about the content [the high-level state being transferred]).


> Fielding looked at the Web, and abstracted it, and calls it REST Architecture.

yeah, the perfect REST protocol is the web, and the perfect REST clients are the humans that navigate the actions represented in the hypermedia.

that said, there's some value to be had in discoverability, even if REST grossly glosses over how to discover parameters for the actions that the state holds.


> that said, there's some value to be had in discoverability, even if REST grossly glosses over how to discover parameters for the actions that the state holds.

There absolutely is, and API systems like GraphQL have taken that to heart (playing with a GraphQL explorer is delightful).

But forcing all programmatic interactions through a "discoverability" layer is not useful, yet that's essentially what REST (as originally formalised) mandates.


I'm pretty sure that if I call `POST /articles/{articleId}/publish`, it will publish the article with ID `{articleId}`. Whereas I'm not sure how `/articles/published/` works.


That's actually a REST feature. You have to interact with /articles/published to get its media type and then based on that you will know what you can do with it. The URI shouldn't matter. It could be /4234234/224234324/020121020/xxxxAAAAzzz but if the media type is "published articles" which is defined in the API spec, then you will know what to do with it.


>>how can we add this to our API?

>By representing the difference in states.

Maybe I'm misinterpreting something here, but doesn't this directly contradict the following?

"A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC’s functional coupling]."

How do you implement the above without making out-of-band assumptions about the name/structure of each action state (e.g. /publish, /submit)? The server-provided links won't help you there, since they might as well be called /foo and /bar - after all, the server provides them and the client shouldn't have any prior knowledge about them.

I mean, sure you would associate each action state (/publish, /submit) with a media type, but that's just shifting the assumptions elsewhere, isn't it?


> advocating against representing actions in URIs

That is a lot of thinking for something that matters very little in REST. URIs for actions are embedded in the response itself anyway. If you aren't using the actions as provided by the response, you're not doing REST.


Many years ago, Dr. Fielding would visit a forum or two and offer answers to questions about REST. I had a number of my questions answered directly by him, which helped me over the years to discover how many people have no clue what REST is or how to use it, yet wave the REST flag as if they did.

I believe Roy quit visiting forums for those reasons; a frustration dealing with all that.


Words are not defined by their creators, they are defined by their usage.

Those people indeed know what REST is -- in fact, they define what REST is. When we talk about REST, we talk about how people use it.

They might not know what "REST as defined by Dr. Fielding is" but that's not really relevant to them or to the industry at large.


"use", not "usage". </joke>


Not according to my dictionary:

use: the action of using something or the state of being used for a purpose.

usage: the action of using something or the fact of being used: a survey of water usage


Words are not defined by their creators, they are defined by their use.

Those people indeed know what USAGE is -- in fact, they define what USAGE is. When we talk about USAGE, we talk about how people use it.

They might not know what "USAGE as defined by coldtea is" but that's not really relevant to them or to the population at large.

:-)


I'd love to read some of his original comments. Do you remember any of the threads? Are those forums publicly available?


The forum was still around last year. I'll spend a few minutes to try and find it.



