
Is it that you disagree on the topic? There is nothing subtle about the fact that most API tooling development goes towards fixing something that does not even have to be there.


Wow, I really disagree. I'm exactly your target consumer, too. My company maintains several data aggregation platforms that, combined, consume ~40 third-party web APIs.

First, the premise of the article is that OpenAPI is for generating documentation. Our company has never meaningfully used it for that. We do send around API docs in OpenAPI format, but that's maybe 5% of the value we get from it. Mostly we use it as a language-agnostic definition that can generate clients and servers for us.
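To make the codegen point concrete, here's a rough sketch of the kind of typed client a generator emits from a spec. The operation, schema, and class names are all made up for illustration, not from any real vendor:

    // Illustrative shape of a generated TypeScript client for a
    // hypothetical `GET /messages/{id}` operation in an OpenAPI document.
    export interface Message {
      id: string;
      body: string;
      sentAt: string; // ISO 8601, per the spec's `format: date-time`
    }

    export class MessagesApi {
      constructor(private baseUrl: string, private apiKey: string) {}

      // One method per operationId; path, verb, and response schema
      // all come from the spec rather than from hand-written docs.
      async getMessage(id: string): Promise<Message> {
        const res = await fetch(`${this.baseUrl}/messages/${encodeURIComponent(id)}`, {
          headers: { Authorization: `Bearer ${this.apiKey}` },
        });
        if (!res.ok) throw new Error(`getMessage failed: ${res.status}`);
        return (await res.json()) as Message;
      }
    }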

> goes towards fixing something that does not even have to be there

I take it that (in your article, anyway) you mean API documentation?

I'm never not going to need API documentation because it's not just syntax. It also tells me how the vendor's object model works. It tells me the right order to make API calls, how to know my rate limit, and how to choose permissions for my API keys that are as narrow as possible.

I think your product has several premises:

1. Many APIs (like SMS providers) are interchangeable except for their request/response schemas

2. It's painful to create an abstraction around a specific vendor's API client

3. It's painful to switch from one API that does (roughly) the same thing to another

4. Many companies are switching between API vendors regularly (or they want to)

In my experience, 0 out of 4 of these premises are true. Some of the simplest services (like transactional SMS and e-mail) are not interchangeable, and as you get to something more complex (like the Hubspot API I was working with this morning), it's not even close. You can't swap out Salesforce for Hubspot, for example. Their products compete, but their models are different, and (most importantly) I already have data in one vendor that I can't just instantly migrate to another.

Most of the time I'd say it's not even worth the effort, but for people who are afraid they'll have to switch API vendors, it's easy to just wrap it with generic method calls and let the implementation details live in one place.

Need to swap it out? Spend 30-60 minutes building a new class that implements the same wrapper interface, roughly like the sketch below, and you're done.
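Here's a minimal sketch of that wrapper in TypeScript; the interface and the vendors' request shapes are hypothetical stand-ins, not real SDK calls:

    // Port: the only SMS surface the rest of the codebase sees.
    interface SmsSender {
      send(to: string, body: string): Promise<void>;
    }

    // Adapter for vendor A; the endpoint and payload are illustrative.
    class VendorASmsSender implements SmsSender {
      constructor(private apiKey: string) {}
      async send(to: string, body: string): Promise<void> {
        await fetch("https://api.vendor-a.example/v1/sms", {
          method: "POST",
          headers: {
            Authorization: `Bearer ${this.apiKey}`,
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ to, body }),
        });
      }
    }

    // Switching vendors means writing one new adapter behind the
    // same interface; callers never change.
    class VendorBSmsSender implements SmsSender {
      async send(to: string, body: string): Promise<void> {
        // vendor B's request shape goes here
      }
    }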

In reality, people just don't switch APIs that much, and the major vendors (like Twilio) just keep supporting the exact same API for many years in order to accommodate those people.

Years ago, when we had SOAP and XML and lots of messy authentication, I could see a reason to use a registry of API abstractions, but nowadays the heterogeneity of REST APIs is a mild enough inconvenience that it's not worth the price to "fix" it -- the price being losing first-party support from the API vendor and having yet another layer to debug.


Thanks for taking the time to write down your thoughts! I deeply appreciate it!

My bad, I was not concrete. What I meant by "this/that" was the entire "find the docs - read the docs - program/generate your client using the docs/OAS - maintain your client forever" cycle, when we could have an application that does all of this without our intervention.

> I'm never not going to need API documentation because it's not just syntax. It also tells me how the vendor's object model works. It tells me the right order to make API calls, how to know my rate limit, and how to choose permissions for my API keys that are as narrow as possible.

Ideally, you should only need to understand the business logic of the API and not how to achieve it; that part can be figured out by your app.

The self-integration approach does not force you to harmonize the offerings. It just abstracts you away from the APIs via the business layer (as you are already doing with your classes).

You might try to harmonize some of the vendors if that is what you like, but that is not the gist. With some vendors and in certain domains it might make sense -- that's why we see domain integrators. With others, it either does not make sense or is not commercially viable. Either way, your app can integrate at the abstraction level without you caring about the technicalities.

The sad truth is that vendors don't want their offerings to be interchangeable in the first place. But that is for another discussion.

Also, it is great to hear that codegen from OAS works for you!

> losing first-party support from the API vendor and having yet another layer to debug

1. If your app self-integrates, you will not lose that.

2. You already have this layer in your hexagonal / clean architecture wrappers. What is worse, everyone is bikeshedding, writing this part over and over.


Big fan of what you are doing! But can you elaborate a bit on TAM? Isn't the possible market too small? https://github.com/coffeemug/defstartup/blob/master/_posts/2...


Great question. It's something we've been thinking about for a while, and I want to start by defining what our market is. Among the various categories of offerings in the market today (managed hosting, DBaaS, and PaaS), Scaphold falls on the PaaS side of things. We do offer plenty of value-added features that attempt to make server-side development a lot more hands-off.

On second glance, however, you could argue that Scaphold is actually a new take on PaaS. Previous services have faltered by trying to do too much in one platform, essentially wrapping and containing all the services you need under one shell. So we're treading unfamiliar territory: we think there's a lot more value in providing an interface to all your data across the web than in a complete, self-contained wrapper around it. This way, developers can get the best of both worlds:

1) The benefits of a backend-as-a-service platform that offers value-added services like hosting, performance monitoring, tooling, app management, etc.

2) The flexibility to pick and choose the services you need (like Auth0 or Stripe), tie in your own custom logic (through a provider like AWS Lambda), and perhaps even choose the database layer as well.

Essentially, the vision is to let developers think as if they were rolling their own backend, while stripping out the time-consuming work of connecting these pieces manually. It's a much more modular approach where we sit as the hub of all your data across the web.
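To sketch the idea (a simplified illustration, not our actual implementation; the service clients here are hypothetical), think of a GraphQL schema where each field can resolve against a different external service:

    import { makeExecutableSchema } from "graphql-tools";

    // Hypothetical clients for external services.
    declare const auth0: { getUser(id: string): Promise<{ id: string; email: string }> };
    declare const stripe: { getSubscription(userId: string): Promise<{ plan: string }> };

    const typeDefs = `
      type BillingSubscription { plan: String! }
      type User {
        id: ID!
        email: String!
        subscription: BillingSubscription
      }
      type Query { user(id: ID!): User }
    `;

    // Each resolver can call out to a different service, so the
    // schema acts as the hub across your data sources.
    const resolvers = {
      Query: {
        user: (_: unknown, { id }: { id: string }) => auth0.getUser(id),
      },
      User: {
        subscription: (user: { id: string }) => stripe.getSubscription(user.id),
      },
    };

    export const schema = makeExecutableSchema({ typeDefs, resolvers });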

To bring this back to the original point about TAM, this ultimately opens the door to a much larger market that includes any cloud service customer, since they're not married to Scaphold as a backend. Scaphold essentially becomes an extremely versatile way to tie in your data, hosted just about anywhere, and combine it with the services you already use. It also reduces the number of pieces we have to manage.

My answer is by no means supposed to be a definitive way of thinking about TAM, but merely one way that we're evaluating it. If you or anyone reading this has any thoughts on this, we'd love to talk more privately about this and the direction we're taking Scaphold.


At a glance it does sound like a very small potential market which makes me curious as to why YC would fund them. Perhaps there's a belief that the market will grow as these tools lower the barriers to entry (facilitated by GraphQL) for building applications?


The fast-growing cloud service market at large is one we're trying to tackle, and you're correct in that GraphQL does indeed provide a lot of value in helping us get there.


The thing is, why aren't APIs a mix of domain vocabularies? Why the isolated silos with no interlinking?

The Web grew strong because sites were interlinked. REST APIs are supposed to be a generalized Web, and yet they are missing the links! What went wrong?


The Web grew strong because of the combination of hypermedia and humans. When they look at a hypermedia document like HTML, humans can easily infer context, relevance, value, and a host of other hard-to-mathematically-define things.

I'm not sure REST APIs are supposed to be a generalized Web. In my view, REST APIs are simply a loose protocol on top of HTTP that allows computers running human-written code to communicate with one another. In general, several human-driven processes must occur before REST APIs have any value. Examples of human involvement: Is this REST API valuable to my business? How much does it cost to use? Does their domain model match ours enough to extract value from integrating with it?


HTML pages are a sufficiently limited domain that integrating them is easy. More full-featured domains are progressively harder to integrate, scaling superlinearly with the possibilities added. (Probably exponentially.)


That's an excellent point! It's surely the case for the "building an API to leverage (pardon the word) the channels" scenario. Those pre-agreed scenarios can't be automated. But other business drivers (direct data monetization or added value) would still benefit greatly from an M2M approach.


That is my point: we don't need to understand the means of communication. Instead, we should just declaratively say what we want the machine to do, and it will figure out the rest.
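To illustrate (a hedged sketch; this perform() interface is entirely hypothetical), the caller would state an intent and its inputs, and the runtime would discover which API, endpoint, auth scheme, and payload shape fulfil it:

    // Entirely hypothetical declarative client.
    interface IntentClient {
      perform<T>(intent: string, input: Record<string, unknown>): Promise<T>;
    }

    declare const client: IntentClient; // provided by the imagined runtime

    async function notifyShipped(phone: string): Promise<void> {
      // No URLs, HTTP verbs, or vendor schemas in sight.
      await client.perform<{ messageId: string }>("communication/send-sms", {
        to: phone,
        text: "Your order has shipped.",
      });
    }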


Yup, AGI should be able to "figure it out" at runtime (when the two machines meet).


The results are relevant only if 1. the name of the API contains the term or affordance you want to use, and 2. you are a human being (you cannot perform such discovery as a machine).


Thanks for the review! I didn't mean the article as a marketing post, but I wanted to share my (long) thought process.

Nothing in the article is conceptually new, but maybe™ the time is now right. Frankly, the part I'm concerned about isn't sharing semantics at runtime; it's the decoupled, declarative approach to writing clients.

With hypermedia, we failed at the gates of client development. Devs tend to tightly couple their code to APIs, ignoring the consequences. If there is no incentive on the client's side, then nothing from the article will matter.
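For contrast, here is roughly what a decoupled, link-following client looks like (a HAL-style sketch; the link relations and URLs are illustrative):

    interface HalResource {
      _links: Record<string, { href: string }>;
      [key: string]: unknown;
    }

    async function follow(url: string): Promise<HalResource> {
      const res = await fetch(url, { headers: { Accept: "application/hal+json" } });
      return (await res.json()) as HalResource;
    }

    // The entry point is the only URL the client knows; everything
    // else is discovered at runtime via named link relations.
    async function listOrders(): Promise<HalResource> {
      const root = await follow("https://api.example.com/");
      return follow(root._links["orders"].href);
    }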


I stopped reading at "Aliens". The Turk explanation was so patronizing IMO it reduced my tolerance for any other apparent nonsense not related to the topic at hand.


Do we really?


Fair point! I'll try to answer with a question: so why is it that Google, Microsoft & Yahoo cooperate on schema.org to establish a shared vocabulary?

They don't have to make it interoperable per se. It'd be enough to use some terms from a shared vocabulary (user, account, address) and then have some business-specific terms.

This way the business can use an existing library that knows how to handle user profiles. It's not that the full client has to be generic; a UI component that knows how to present a portion of a dictionary is enough.
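For instance (a rough sketch; the types mirror a slice of schema.org's Person and PostalAddress terms, and the renderer is hypothetical):

    // A component that understands a slice of a shared vocabulary
    // without knowing anything else about the API that returned it.
    interface SchemaOrgPostalAddress {
      streetAddress?: string;
      addressLocality?: string;
    }

    interface SchemaOrgPerson {
      name?: string;
      address?: SchemaOrgPostalAddress;
    }

    function renderPersonCard(person: SchemaOrgPerson): string {
      const lines = [person.name ?? "(unnamed)"];
      if (person.address) {
        lines.push(
          [person.address.streetAddress, person.address.addressLocality]
            .filter(Boolean)
            .join(", "),
        );
      }
      return lines.join("\n");
    }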


> So why is it that Google, Microsoft & Yahoo cooperate on schema.org to establish a shared vocabulary?

Reduced differentiation between underlying services pushes them from the product world into the commodity world. Lower margins and stronger competition at that level certainly benefit the big players. The services in question, maybe not so much.


Great counterpoint.

I guess the availability (the technical possibility of building) of interoperable APIs does not mean they will be used in every case. They would be used where there is an economic benefit, and not otherwise.


> far less data

/author of API Blueprint here/ I would not say this is the case. API Blueprint focuses on different data for various reasons (design-first, documentation-oriented). In order to bring Swagger support, we had to introduce some extensions to the format. We are, however, trying to unify the tooling under the hood of the Refract project (https://github.com/refractproject/refract-spec/blob/master/n...) through (de)serialization plugins for the Fury.js library (https://github.com/apiaryio/fury.js)
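If you want to poke at it, here is a rough sketch of the Fury.js plugin flow from memory; the adapter package names and the parse callback signature are assumptions on my part, so treat the linked repos as authoritative:

    // Rough sketch; exact package names and signatures may differ.
    const fury = require("fury");
    fury.use(require("fury-adapter-swagger"));     // Swagger/OAS input
    fury.use(require("fury-adapter-apib-parser")); // API Blueprint input

    fury.parse({ source: "# GET /message\n+ Response 200\n" }, (err: Error | null, result: unknown) => {
      if (err) throw err;
      // `result` is a Refract element tree, the shared representation
      // both formats are (de)serialized into.
      console.log(result);
    });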

