REST - What can go wrong if we do NOT follow RESTful best practices?
tl;dr: scroll down to the last paragraph.
There is a lot of talk about best practices when defining RESTful APIs: which HTTP methods to support, which HTTP method to use in each case, which HTTP status codes to return, when to pass parameters in the query string vs. in the path vs. in the content body vs. in headers, how to do versioning, result set limiting, pagination, etc.
If you are determined to make use of best practices, there are lots of questions and answers out there about the best practice for doing any given thing. Unfortunately, there appears to be no question (nor answer) as to why we should use these best practices in the first place.
Most of the best practice guidelines direct developers to follow the principle of least surprise, which, under normal circumstances, would be a good enough reason to follow them. Unfortunately, REST-over-HTTP is a capricious standard, some of whose best practices are impossible to implement without becoming intimately involved with it, and the drawback of such intimate involvement is that you tend to end up with your application being tightly bound to a particular transport mechanism. So, some people (like me) are debating whether the benefit of "least surprise" justifies the drawback of littering the application with REST-over-HTTP concerns.
A different approach, examined as an alternative to best practices, suggests that our involvement with HTTP should be limited to the bare minimum necessary in order to get our application-defined payload from point A to point B. According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, you never return any HTTP status code other than HTTP 200 OK, and you never pass any parameter in any way other than within the application-specific payload of the request. The request will either fail to be delivered, in which case it is the responsibility of the web server to return an "HTTP 404 Not Found" to the client, or it will be successfully delivered, in which case the delivery of the request was "HTTP 200 OK" as far as the transport protocol is concerned; anything else that might go wrong from that point on is exclusively an application concern, and none of the transport protocol's business. Obviously, this approach is kind of like saying "let me show you where you can stick your best practices".
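To make the alternative concrete, here is a minimal sketch of the "single entry point, POST-only, always 200" style described above. The method names and envelope fields (`method`, `params`, `ok`, `error`) are invented for illustration, not taken from any real API:

```python
import json

# Hypothetical dispatcher sitting behind the application's single POST URL.
# Every request names its operation inside the JSON payload; the transport
# always answers 200 OK, and failures travel inside the response body.
HANDLERS = {
    "get_user": lambda params: {"id": params["id"], "name": "alice"},
}

def handle(raw_body: str) -> str:
    """Dispatch one request envelope; errors live inside the 200 body."""
    request = json.loads(raw_body)
    handler = HANDLERS.get(request["method"])
    if handler is None:
        # An application-level failure, still delivered as HTTP 200 OK.
        return json.dumps({"ok": False, "error": "UnknownMethodException"})
    return json.dumps({"ok": True, "result": handler(request["params"])})

ok = handle('{"method": "get_user", "params": {"id": 7}}')
err = handle('{"method": "delete_user", "params": {"id": 7}}')
```

Note that at the HTTP layer both `ok` and `err` look identical (a successful POST); only a client that parses the envelope can tell them apart.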
Now, there are other voices that say that things are not that simple, and that if you do not follow the RESTful best practices, things will break.
The story goes that, for example, in the event of unauthorized access, you should return an actual "HTTP 401 Unauthorized" (instead of a successful response containing a JSON-serialized UnauthorizedException) because upon receiving the 401, the browser will prompt the user for credentials. Of course this does not hold water, because REST requests are not issued by browsers being used by human users.
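The two styles of reporting the same failure can be sketched side by side. This is an illustrative comparison with invented function names, assuming a hypothetical authorization check:

```python
import json

def restful_response(authorized: bool):
    # Transport-level signalling: the status code carries the semantics,
    # visible to any generic HTTP client or intermediary.
    if not authorized:
        return 401, json.dumps({"error": "Unauthorized"})
    return 200, json.dumps({"data": "secret"})

def envelope_response(authorized: bool):
    # Application-level signalling: always 200 OK; the error lives in
    # the body, invisible to anything that does not parse the payload.
    if not authorized:
        return 200, json.dumps({"ok": False, "error": "UnauthorizedException"})
    return 200, json.dumps({"ok": True, "data": "secret"})
```

A proxy, load balancer, or access log can distinguish success from failure in the first style without ever reading the body; in the second style every exchange looks like a success at the HTTP layer.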
Another, more sophisticated version of the story goes that usually, between the client and the server there exist proxies, and these proxies inspect HTTP requests and responses and try to make sense out of them, so as to handle different requests differently. For example, they say, somewhere between the client and the server there may be a caching proxy, which may treat all requests to the exact same URL as identical and therefore cacheable. So, path parameters are necessary to differentiate between different resources; otherwise the caching proxy might only ever forward a request to the server once, and return cached responses to all clients thereafter. Furthermore, such a caching proxy may need to know that a given request-response exchange resulted in a failure due to a particular error such as "Permission Denied", again so as not to cache the response; otherwise a request resulting in a temporary error may be answered with a cached error response forever.
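The collision described above can be demonstrated with a deliberately dumb cache keyed on the URL alone. This is a toy model, not the behaviour of any particular proxy product:

```python
import json

# A deliberately dumb caching proxy: responses are keyed on URL alone,
# and the request body is never inspected.
cache = {}

def origin(url: str, body: str) -> str:
    # Pretend origin server: echoes back which user was requested.
    return json.dumps({"user": json.loads(body)["id"]})

def proxy(url: str, body: str) -> str:
    if url not in cache:
        cache[url] = origin(url, body)
    return cache[url]

# Two distinct resources hidden behind one URL collide in the cache:
first = proxy("/api", '{"id": 1}')
second = proxy("/api", '{"id": 2}')  # served stale from the cache
```

With the resource encoded in the path instead (`/api/users/1` vs. `/api/users/2`), the cache keys would differ and the collision would disappear.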
So, my questions are:
- Besides "familiarity" and "least surprise", what other reasons are there for following REST best practices?
- Are these concerns about proxies real?
- Are caching proxies really so dumb as to cache REST responses indiscriminately?
- Is it hard to configure the proxies to behave in less dumb ways?
- Are there any drawbacks to configuring the proxies to behave in less dumb ways?
It's worth considering that what you're suggesting is the way HTTP APIs used to be designed about 15 years ago. API designers have been tending to move away from that approach, and they have good reasons.
Some points to consider if you want to avoid using REST on HTTP:
REST on HTTP is an efficient use of the HTTP/S transport mechanism. Avoiding the REST paradigm runs the risk of every request/response being wrapped in verbose envelopes. SOAP is an example of this.
REST encourages client and server decoupling by putting application semantics into standard mechanisms - HTTP and XML/JSON (or other data formats). These protocols and standards are supported by standard libraries and have been built on years of experience. Sure, you can create your own "unauthorized" response body with a 200 status code, but REST frameworks make this unnecessary, so why bother?
The REST design approach encourages a view of the distributed system that focuses on data rather than functionality, and this has proven to be a useful mechanism for building distributed systems. Avoiding REST runs the risk of focusing on RPC-like mechanisms, which have risks of their own:
- they can become fine-grained and "chatty"
- which can make inefficient use of network bandwidth
- which can tightly couple client and server, by introducing statefulness and temporal coupling between requests
- and which can be difficult to scale horizontally

Note: there are times when an RPC approach is a better way of breaking down a distributed system than a resource-oriented approach, but they tend to be the exceptions rather than the rule.
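The "chatty" risk in the list above can be sketched by counting round trips. All names here are invented; the point is only that a fine-grained RPC style needs one network exchange per field, while a resource-oriented style returns one whole representation:

```python
# Toy model: count network round trips for a chatty RPC style versus
# one coarse-grained resource fetch. ORDER and the function names are
# illustrative, not from any real API.
round_trips = 0
ORDER = {"id": 42, "status": "shipped", "total": 99.5}

def rpc_get_field(field: str):
    global round_trips
    round_trips += 1          # each fine-grained call costs a round trip
    return ORDER[field]

def get_order_resource(order_id: int):
    global round_trips
    round_trips += 1          # one round trip returns the whole resource
    return dict(ORDER)

# Chatty: three round trips to assemble three fields.
_ = (rpc_get_field("id"), rpc_get_field("status"), rpc_get_field("total"))
chatty = round_trips

round_trips = 0
_ = get_order_resource(42)
coarse = round_trips
```

The chatty style also couples the client to the call sequence (temporal coupling), since later calls often assume earlier ones have happened.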
There are existing tools for developers that make debugging and investigation of RESTful APIs easier. It's easy to use a browser to do a simple GET, for example, and tools such as Postman or RESTClient exist for more complex REST-style queries. In extreme situations tcpdump is useful, as are browser debugging tools such as Firebug. If every API call has application-layer semantics built on top of HTTP (e.g. special response types for particular error situations), you lose much of the value of this tooling. Building SOAP envelopes in Postman is a pain. So is reading SOAP response envelopes.
Network infrastructure around caching can indeed be as dumb as you're suggesting. It's possible to get around this, but you really have to think about it, and it will inevitably involve increased network traffic in situations where it's unnecessary. Caching responses to repeated queries is one way in which APIs scale out, so you'll need to "solve" the problem yourself (i.e. reinvent the wheel) of how to cache repeated queries.
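What a standards-aware cache gives you "for free" can be sketched as follows: it stores only successful responses that are explicitly marked cacheable, and it lets error responses through to the origin on retry. This is a simplified model of HTTP caching behaviour, with invented function names:

```python
# Simplified model of a cache that honours status codes and the
# Cache-Control header - behaviour RESTful responses get from standard
# infrastructure, and which an always-200 envelope design must rebuild.
cache = {}

def cacheable(status: int, headers: dict) -> bool:
    # Store only successful responses not marked "no-store".
    return status == 200 and "no-store" not in headers.get("Cache-Control", "")

def fetch(url: str, origin):
    if url in cache:
        return cache[url]
    response = origin(url)
    status, headers, body = response
    if cacheable(status, headers):
        cache[url] = response
    return response

calls = []
def origin(url: str):
    calls.append(url)
    if len(calls) == 1:
        return 403, {}, "Permission Denied"   # transient failure
    return 200, {"Cache-Control": "max-age=60"}, "payload"

r1 = fetch("/api/users/1", origin)  # 403: passed through, NOT cached
r2 = fetch("/api/users/1", origin)  # retried at origin, now 200 and cached
```

Had the 403 been wrapped in a 200 envelope, `cacheable` would have stored the error body, and every later client would receive "Permission Denied" forever.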
Having said all that, if you want a pure message-passing design for your distributed system rather than a RESTful one, why consider HTTP at all? Why not use message-oriented middleware (e.g. RabbitMQ) to build your application, possibly with some sort of HTTP bridge somewhere for Internet-based clients? Using HTTP as a pure transport mechanism with nothing but "message accepted / not accepted" semantics seems like overkill.