Bill de hÓra wrote an excellent post on Snowflake APIs, in which he speculates about two developments and one debate for data APIs in 2009. The two developments, viz., "putting links into API data" and "standardisation of feed metadata", are achievable, despite resistance from some who may find consistently providing metadata and links unnecessary. That resistance is easy to understand, but it is important to realize that serendipitous reuse depends on such consistency.
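To make the idea concrete, here is a minimal sketch of what "links in API data" might look like. The field names, URIs, and relation names below are illustrative assumptions, not a proposed format; the point is only that the representation carries typed links a client can select by relation, rather than leaving clients to construct URIs themselves.

```python
# A hypothetical order representation that carries its own links and
# metadata. All names and URIs here are made up for illustration.
entry = {
    "id": "urn:example:order:1234",
    "updated": "2009-01-05T10:00:00Z",
    "status": "shipped",
    "links": [
        {"rel": "self", "type": "application/atom+xml;type=entry",
         "href": "http://example.org/orders/1234"},
        {"rel": "payment", "type": "application/xml",
         "href": "http://example.org/orders/1234/payment"},
    ],
}

def find_link(representation, rel):
    """Pick a link out of a representation by its relation name."""
    for link in representation.get("links", []):
        if link["rel"] == rel:
            return link
    return None

payment = find_link(entry, "payment")
print(payment["href"])  # -> http://example.org/orders/1234/payment
```

A client written against relation names like "payment" keeps working when the server moves resources around, which is exactly the kind of consistency that serendipitous reuse depends on.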
The debate that Bill mentions, viz., "should there be that many custom formats?", will continue. I am not yet convinced by the options available and the prescriptions being made, and I think a lot of work remains to be done at both the conceptual level and the software level.
On the conceptual level, we need to continue to experiment with general-purpose as well as specialized formats, probe their programmability, extensibility, and reusability, and then identify patterns/models that are pragmatic and practicable. IMO, most of the current debates are a tad academic and do not yet reflect practical experience. I suspect that this debate will not settle until folks interested in various formats come together for serious experimentation before inventing "SiteML"s.
Much more work needs to happen at the software level. The programming APIs that I am aware of, particularly on the client side, barely scratch the surface of linking and metadata. Most client-side code is still tied to URIs, not media types and link relations. Assuming we solve those two problems, the next challenge is to take the data format debate to the software level. In the meantime, translation layers that smooth out disparities between formats may help, but it is still important to solve the problem at its root for efficiency reasons.