APIs are a Pain

I don’t like writing client apps that use APIs. Writing HTTP client code to talk to APIs is verbose, repetitive, chatty and slow. And that is on top of dealing with latency and bandwidth constraints and the core functionality of the client app itself – such as building a snappy UI or supporting some other business use case.

First – the interface mismatch. APIs are almost always designed from the producer’s point of view. If I’m building an API, I would look at my data model and my performance/scalability requirements, and design the API to meet the use cases that I think are the most common across all consumers of my API. That is, my goal would be to maximize the use and reuse of my API. In this process, I am bound to trade off special requirements of my consumers and stick to a common denominator. This creates a mismatch between what the consumer needs and what I’m offering. Here is an example.

A client app needs to search a list of products to get their IDs, some details and some user generated content like reviews for each product. There are three APIs that the client needs to interact with to get this data – one to search products, one to get details for each product, and another to get reviews for each product.

This is not a made-up or hypothetical example. What the client needs is this.

A single API that takes a keyword and returns certain fields for each product found – no more, no less.

What the client got is three APIs that return some data that includes what the client needs plus some more. The API design makes sense for the producers of these APIs – the search folks are focused on indexing and serving low-latency results, the details API wants to serve all the gazillion details that are relevant to products, and the reviews API wants to focus on low-latency writes and potentially stale reads. But for the consumer, an ideal interface is one that gives just the fields it needs with one single request.

Second – writing client code is slow, repetitive and verbose. For the above example, here is what the client needs to do:

  • Call the search API to find products, and extract product IDs from the response.
  • Call the details API n times – once per product ID – to get the details, and extract the fields needed from each response.
  • Call the reviews API n times – once per product ID – to get reviews, and extract the fields needed from each response.

Unless you have canned SDKs for these APIs, writing code to make 2n+1 HTTP requests and process the responses can take a while. In one of the Java implementations I looked at, the code was over 300 lines long – and that was after excluding the request-making and response-parsing code, which was already factored into supporting libraries. If the client needs to implement ten such use cases, you have 3000 lines of code just doing data retrieval.

Third – getting enhancements you need from producer teams slows you down. For my example above, having a bulk API for details and reviews simplifies the client to some extent. But for both good and bad reasons, API changes that the clients need don’t happen immediately. They take time – usually some sprints – and sometimes they never happen. The reality is that teams have their own priorities. Getting a single API for the above is unlikely to happen!

Fourth – requests for use cases like this are chatty. Three hundred lines of code may not be a big deal, but making so many HTTP requests is. Here is a common evolution of an implementation. To simplify the discussion, let’s assume that we have bulk APIs for details and reviews.

Take 1: Use blocking IO to make 3 HTTP requests.
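A minimal Java sketch of this version, with stub methods standing in for the real HTTP calls and response parsing (the method names, IDs and payloads here are invented for illustration):

```java
import java.util.List;

// Blocking version: each call waits for the previous one to finish.
// The three methods below are stand-ins for real HTTP requests plus parsing.
public class BlockingRetrieval {
    static List<String> searchProducts(String keyword) {   // search API
        return List.of("p1", "p2");                        // stubbed product IDs
    }
    static String getDetails(List<String> ids) {           // bulk details API
        return "details for " + ids;
    }
    static String getReviews(List<String> ids) {           // bulk reviews API
        return "reviews for " + ids;
    }

    public static void main(String[] args) {
        List<String> ids = searchProducts("camera");       // t1
        String details = getDetails(ids);                  // t2, waits for t1
        String reviews = getReviews(ids);                  // t3, waits for t2
        System.out.println(details + "; " + reviews);
    }
}
```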


Code complexity of this implementation may be low, but the retrieval takes t1 + t2 + t3, where ti is the time taken by each API request.

Take 2: If latency is a concern, parallelize the requests. After the search API returns product IDs, you can make 2 requests in parallel and join the responses.
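Sketched with the same invented stubs, one way to fork the two dependent calls and join their results is CompletableFuture:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Parallel version: details and reviews are fetched concurrently once
// search returns. The stub methods stand in for real HTTP calls.
public class ParallelRetrieval {
    static List<String> searchProducts(String keyword) { return List.of("p1", "p2"); }
    static String getDetails(List<String> ids) { return "details for " + ids; }
    static String getReviews(List<String> ids) { return "reviews for " + ids; }

    public static void main(String[] args) {
        List<String> ids = searchProducts("camera");             // t1 happens first

        // fork: the two dependent calls run in parallel
        CompletableFuture<String> details =
            CompletableFuture.supplyAsync(() -> getDetails(ids));
        CompletableFuture<String> reviews =
            CompletableFuture.supplyAsync(() -> getReviews(ids));

        // join: block until both complete, then merge the responses
        System.out.println(details.join() + "; " + reviews.join());
    }
}
```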


Code complexity now increases, but the whole retrieval takes only t1 + max(t2, t3), where t1 is the time taken to search.

Now imagine that the client needs to call yet another API based on review IDs.

Take 3: Orchestrate the retrieval based on dependencies.
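One way to express this dependency-driven orchestration, again with invented stubs and a hypothetical fourth API keyed by review IDs; each call fires as soon as the calls it depends on have completed:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Orchestrated version: search feeds details and reviews in parallel,
// and reviews in turn feed a (hypothetical) fourth API keyed by review ID.
public class OrchestratedRetrieval {
    static List<String> searchProducts(String kw)            { return List.of("p1", "p2"); }
    static String getDetails(List<String> productIds)        { return "details for " + productIds; }
    static List<String> getReviews(List<String> productIds)  { return List.of("r1", "r2"); }
    static String getByReviewIds(List<String> reviewIds)     { return "data for " + reviewIds; }

    public static void main(String[] args) {
        CompletableFuture<List<String>> ids =
            CompletableFuture.supplyAsync(() -> searchProducts("camera"));

        // details depends only on search; the review branch chains two calls
        CompletableFuture<String> details =
            ids.thenApplyAsync(OrchestratedRetrieval::getDetails);
        CompletableFuture<String> reviewData =
            ids.thenApplyAsync(OrchestratedRetrieval::getReviews)
               .thenApplyAsync(OrchestratedRetrieval::getByReviewIds);

        // join the two independent branches
        System.out.println(details.join() + "; " + reviewData.join());
    }
}
```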

[Figure: Painful dance]

In another example I’ve seen recently, a native app makes 17 HTTP requests, with some dependencies among them, before painting the UI. The code is over 3000 lines long! Of a team of three developers, one is dedicated to writing and maintaining this code.

Now move the client farther from the API servers. In addition to the code complexity, the client has to deal with the cost of the network. You may want to move all of this request dance to some middle tier that is closer to the API servers than the client is.

[Figure: Middle tier]

Fifth – consistent RESTful APIs don’t matter as much as we think. I would love to see every API be RESTful, consistent, hypertext driven, and more importantly, interoperable. The reality is that getting consistent APIs is hard – particularly when they are built by distributed teams. For the record – I vehemently hate SOAP APIs and the SOAP mindset. I dislike RPC-style operation names tunneled over HTTP. I frown and cringe whenever I see unnecessary custom headers and complicated formats. I wish POST+XML would go away. I wish every API would get rewritten to the modern understanding of HTTP and REST, serving JSON.

But I would rather spend my time enabling interoperability than preaching for consistency.

Hypermedia does not help either. Hypermedia can help navigation among resources, but navigation is not a concern in the above use case. The client app’s challenges are aggregation, parallelization, and orchestration of forks and joins.

So, what can we do about it?

Why would I write a long blog post if I had nothing to offer?

I’m excited to reveal that, at eBay, my team has been working on a new platform to simplify use cases like the above:

  • Bring down the number of lines of code needed to call APIs
  • Automatically orchestrate API calls as needed
  • Create new consumer-centric APIs that wrap existing producer-centric APIs
  • Cut down bandwidth usage and latency

Best of all, this platform will soon be on github. If you are interested in taking a sneak peek and want to provide feedback, please email me (subbu/at/subbu/dot/org) with your github ID. Please see for more info.

Comments



  1. How interesting. I’m also starting to work on things; perhaps we will collaborate. Or do battle. :)

    I do hope it’s going to be small pieces, loosely joined.

    P.S. would love to see what’s happening on github.

    • Hi Mark – thanks for dropping by.

      I would love to be able to collaborate. The reason for pushing it out to github is to help that happen. The CLA should be ready soon too. Lawyers have their own clocks!

  2. I’m interested in the basic idea behind the new platform. I strongly doubt that there is a silver bullet: complex, use-case-specific queries either require complex combinations of API-calls or complex, use-case-specific APIs. The only alternative to your REST-scenario is creating a more powerful query-API to do the joins and aggregations at the server instead of the client. The MediaWiki-API is a good example. But no platform will solve the engineering problem to decide whether a concrete aggregation for some type of query should better be done at the client or at the server.

    • The premise here is that complex use-case-specific APIs are better built by consumers and run either on the consumer or on some middle tier. I just opened access for your github account, nichtich. Thanks in advance for any feedback. You can reach me at subbu/at/subbu/dot/org.

      • I had a look at the code – unless it is better specified and ported to more than one programming language, it does not look very much like a general solution to the problem of APIs, but like a particular piece of software for a very specific context. However, the general concepts and ideas behind this software could provide a general solution, so I don’t argue against it. I also created an issue on github.

  3. Agree with Jakob. Tooling that eases the experience of building clients or composed resources would be useful, but can’t be a solution to something that is primarily a question of design. Also: aren’t there existing toolsets/platforms already tailored to this use case, e.g. node.js?

  4. Hard to comment on your solution without seeing it, so we can only comment on the problem you’re outlining. Well: it does look like you’re creating a “straw man” style of argument, here – a problem that will be perfectly addressed by your solution, when it’s revealed!

    The real issue is that APIs aren’t RESTful, of course, as you pointed out yourself.

    But I don’t think that it’s actually all that hard to fix!

    We need to start by defining JSON formats for a range of common data types: contacts based on vCard, events based on iCalendar, news based on Atom, publication info based on Dublin Core, etc. Hit the 80:20 point there. Then gradually add more formats and fields to cover additional common cases.

    This is the approach I’m proposing in the “Object Network” – – which will define these JSON Object formats and how they all link up on the ‘Net.

    Here’s the intro:

    “Instead of writing a whole new, dedicated HTTP API to your site, publish your data using common JSON object formats, and link your data up, both within your own sites and to other sites. Become part of a global Object Network!”

    So what’s /your/ proposal, Mark? Perhaps we could all get together and build the Object Network!! :-)

    • Duncan – let me know if you’re interested in checking it out. I’m working on opening up for a preview this weekend.

      Actually, I’m not convinced that defining new JSON formats helps the client here. The costs I point out are not related to the lack of consistent formats, but have to do with the natural differences in consumer-vs-producer needs, aggregation, and orchestration of requests.

    • @Duncan, Microsoft created a similar mashup canvas, based on a JSON implementation of the AtomPub protocol. I think it was called Live Mesh, and you could create mesh-enabled applications in the Live services canvas in the browser. For that matter, Yahoo Pipes is a similar concept. I think both those projects languish, as I haven’t heard much buzz recently. Is that the kind of thing you’re proposing for the Object Network?

      • Hi Dilip (if you’re still watching this page!)

        Not sure about that MS thing, doubt it has much to do with my ideas. Yahoo Pipes was a graphical mashup generation tool, and I seem to remember that it, like many other such tools, was dominated by the feed concept. Again, not sure it’s anything like the Object Network. I’m not aware of anything that’s in the same space. I’d be delighted to be shown stuff I was unaware of, of course!

        The Object Network is hopefully much simpler and easier to understand!

    • Duncan, on the contrary I would argue that the *more* RESTful a service is, the more the need is for Subbu’s solution. When you have many clients – all with their own individual requirements – all trying to use the same API this problem becomes very obvious.

      • on the contrary I would argue that the *more* RESTful a service is, the more the need is for Subbu’s solution.

        You’ve no idea :)

        See the SOAP example on to see the pains of using a SOAP API. Still simpler than how it is traditionally done by SOAP-era tools, but more complex than using the GET version of the same API.

        Secondly, the days of a client using one API at a time are long gone. Modern client apps (web or native) need to fetch more data from APIs in order to personalize the experience.

        • Of course, I meant APIs not API. And agreed – if there is only one way of getting the information from the consumer, you jump through hoops in order to create a product that differentiates itself.

  5. Subbu,

    What happened? You complain about the world you guys created? I must be dreaming. I had to read your post multiple times to start believing what I was reading:

    >> I don’t like writing client apps that use APIs. Writing HTTP client code to talk to
    >> APIs is verbose, repetitive, chatty and slow.

    But you guys told us all along that the interface was “uniform”. What happened? How come 99% of the world creates APIs then? Do you think we, the 99%, are that stupid?

    >> APIs are almost always designed from the producer’s point of view.

    You bet, the 1% of our industry (the Facebook, Google, … of the world) needed “REST” so they could scale their infrastructure to where we are today, imposing in the process the “client tax” that we all pay, i.e. hours, days, weeks of stupid work to talk to these shiny interfaces. You can claim all you want that you don’t like SOAP, but we are back to 1999 and XML-RPC. Everyone is now looking for a contract and tooling.

    I can’t wait to see your solution, but frankly looking at this problem from a “client-centric” view of the world (you mean you are no longer “resource-centric”?) would not get you very far, in the real world.

    APIs are inherent to the lifecycle of the entities they manage. Beyond that, peer-to-peer is the way to go – always has been, always will be; everything else is just a band-aid. You are looking at the wrong problem by ignoring the very foundation of information.

  6. Spectacular post, Subbu. Nicely sums up my last 3 years of pain points with real world REST API client development.

    I would argue, though, that while API self-consistency is hard, achieving it pays off in spades. For every inconsistency within an API, there exists some one-off client code to deal with that inconsistency, which in turn offloads more work and more code to the client developer. This same principle is true for any API, of course, not just those in the REST space.

    • True, but I found it hard to get consistency for two reasons:

      * Driving consistency requires a stick – and no one likes to be told. We tried this at Yahoo with a “spec” on how to write consistent APIs. But it failed for several reasons – the rules in the spec were written from a consistency point of view, and ignored use-case-specific perf constraints. Those rules could not enable consistency on their own. That failure gave me the motivation and material to write the RESTful Web Services Cookbook.

      * At the end of the day, quantifying consistency is not trivial. APIs can’t be rewritten for the sake of consistency without also offering significant pay off in terms of some other benefit.

      That is why I’m bought into the idea of enabling interoperability across inconsistent things to the extent possible. In a way, interop efforts are complementary to consistency efforts.

      • I agree strongly with all of your points.

        As far as quantifying consistency, though, I’m coming at that from the point of view of writing client SDKs for specific APIs (e.g. consider building something akin to ActiveResource for Rails). Specific consistency issues (i.e. lack of adherence to the patterns that the client SDK relies on to generically model and interact with the resources in the API) quickly become identifiable in such an effort, and can feedback positively into the API design if undertaken concurrently with the API’s development.

        Yes, I agree that it’s super hard to achieve consistency, especially with large/distributed teams, and I really appreciate what your current effort will bring to the table even in the best of development circumstances, particularly once backward compatibility becomes critical. All I’m saying is that while it’s very hard to achieve consistency, I think it’s worth fighting for (while doing so is viable).

  7. Subbu,

    Pardon my ignorance as I come up to speed with node and server-side javascript. A couple of things strike me as interesting just glancing through some of the examples and documentation.

    – There seems to be an implicit need for trusted clients with something like this. Or at least a way to govern and throttle bad players in the client-side ecosystem.
    – Secondly, I say this coming from my conditioned SOA background: the client is driving the server-side behavior and flow, which is considered an anti-pattern.

    Obviously the investment is coming from ebay, which means there is a good case for it; much of it is evident in the post. But I’m curious whether there are any anecdotal caveats/benefits that you’re seeing from dog-fooding this framework.

    Also, from a scalability perspective, I’m curious why the design decision was made to move to a service that can execute “transaction scripts” as opposed to the browser as a canvas. Granted that it’s more chatty, but haven’t we just pushed the problem of chattiness to a server?

  8. Pretty interesting ideas – a similar idea came to my mind long ago, but with my knowledge I couldn’t come up with a general solution. I then worked out a specific layer for the client side; it worked, but took some effort. I’m quite interested in your solution – could you let me take a look at it? Thanks a lot.

  9. Hmmm. Ok, yaql would serve the custom query purpose for clients, but the entire setup of the article assumes we’re in 2005. Who on earth still operates as outlined? So lame to take the neanderthal as an example of a human being – why not look at the latest model? I’ve always implemented OO (no intermediate ql) CRUD on top of service layers, facilitating almost all of the use cases outlined. A nosql architecture makes all of this even easier to set up.
    But again, I can see your solution as an addon to cover the missing 5% (don’t fight me over this plz) of client use cases.

    • Morriz – you may be missing the point that I’m talking about HTTP APIs here, and that building OO wrappers, though common, is not productive. It also makes clients chatty.

  10. What an excellent article you wrote to advocate and promote what your team is doing!
    Yet another layer of software around a faulty design, trying to solve problems that would not have existed if the eBay core system had been designed properly.
    However, you’re only battling problems that you’ve created yourself as eBay.
    So wonderful that you don’t seem to realize that, blaming the SOAP messenger and managers, and throwing a ton of unrelated arguments at us instead of critically looking at your own eBay system.
    You could already solve most issues by rewriting the software running on your eBay servers, but somehow you’ve either been told not to touch the eBay side (welcome, corporate pride and hierarchy) or you have been stuck with a very limited paradigm regarding the use of APIs and interfaces – trying to fix it all by writing yet another layer of software around your actual problem.

    We (the people who try to interface to sites like eBay) need a generic interface like SOAP/REST because the existence of SOAP clients on every major platform makes it easier for us to use your interface. Thus generating more sales for you as eBay.
    If performance or data traffic is the problem, then there are enough other well supported options besides writing your own language to do what you should be able to do in the first place.
    SOAP/REST/WebServices interface are all designed with a certain purpose and to support a basic set of features like authentication and internet-connections that might fail at any given point. That is the reason why there is overhead. If traffic is a problem, nobody limits you from using ProtocolBuffers inside SOAP as a means of data transport instead of XML or text. If speed is a problem, nobody limits you by using TCP/UDP binary packets. It is just how much your imagination limits you.
    Yes, it would have been great if a single SOAP request could send back multiple messages in order to improve performance because some data might be available faster than other data on the server-side data system.
    That’s just a limitation of the underlying transport protocol.
    But you are always free to design a SuperSOAP or RacingREST interface to handle these kind of improvements.

    Also, the number of lines has hardly anything to do with complexity and performance.
    For performance reasons it is well accepted to use a larger amount of code, that’s just a trade-off. I’ve seen 16 lines of code replaced by 4000 lines and the performance improved by 150 times.

    Nobody is posing limits on making multiple data requests through a single SOAP call. That is only a limitation you as eBay have decided to pose upon us by only providing us a table based query interface.
    In fact you could wrap dozens of database requests in a single SOAP call, retrieving complex related data in a single call.
    But as eBay you are somehow trying to impose a table-based access limitation to us whilst all our problems, including world peace, could have been solved if we just could throw advanced SQL statements at your database layer.
    Ok, forget about world peace.
    But SOAP/REST make our lives easy at a certain cost. But that cost outweighs anything else from yet-another-proprietary interface or yet-another-programming language to be able to do something that we could have done in the first place if the main system was designed properly.

    Start working on stuff that really matters instead of trying to shout out to the world that you’re doing a fantastic job on creating yet another layer of software to wrap up a problem that should not have existed in the first place.

    • Start working on stuff that really matters instead of trying to shout out to the world that you’re doing a fantastic job on creating yet another layer of software to wrap up a problem that should not have existed in the first place.

      Thanks for the comment, but it seems that you may be conflating several things. Producer-centric interface boundaries exist for performance and scalability reasons apart from reuse. It is quite unreasonable and impractical to try to rewrite those to optimize every consumer use case. You can only solve those by another layer of indirection. Today, most of those layers are done programmatically. That’s the scope of this post.

  11. Hello Subbu,

    Thank you for this blog post. It came just in time to influence the RESTful API I was working on at the time. I tried to take your concerns to heart and design the API in such a way that a client doesn’t have to issue many requests to get relevant information for a list of items. I did this by allowing a modifier in the URI which, if present, results in all list elements being annotated with selected information from the referenced item. So, instead of getting the list and then accessing every single entity, you get all you need in just a single request.

    If you would like to take a look at how I have done this, please feel free to go to and login as user “testuser” (password also “testuser”). Then go to and you will see my take on a human-readable RESTful API. Click on the ‘list of nodes’ then click on “show related” to see what I mean.

    I would love to hear your feedback on this approach, if you have time.

    Thank you again…