You may want to rethink the pricing. I have several server variants that I effectively clone in different datacenters across my deployment. Since the base images are static I would really only need to run the agent on n servers (where n is the number of server variants that I have) to ensure that my entire deployment is protected.
I'm not sure if you would consider this unethical. I would probably feel differently about the pricing if it were tiered levels related to the entire size of the deployment (e.g.: 1-50 servers: $x/mo, 50-250 servers: $y/mo, 250+ servers: $z/mo).
>I would probably feel differently about the pricing if it were tiered levels (e.g.: 1-50 servers: $x/mo, 50-250 servers: $y/mo, 250+ servers: $z/mo).
Yeah, that's fair and a good suggestion. I think we may very well end up doing something like that. We're still trying to figure out what kind of model best suits the server fleets people have.
We'll be keeping the first server free, and will probably have a not-for-profit tier.
As an added note, $9 per server doesn't seem excessive when I think of our large bare metal servers, but it starts looking _really_ pricey if I look at our $50/mo m3.mediums in AWS.
appCanary monitors the software on your servers and notifies you when you have to take action. In a previous life, we spent a lot of time worrying about what needs to be updated where and so we built this.
We currently let you know about Ruby vulns deployed on any Linux, and vulnerable packages if you run Ubuntu. Support for Docker and other vuln sources is just around the corner.
This looks interesting. I have been looking for a solution to this problem without any clear conclusions so far. Nessus and Qualys have new agent-based scanners now, but I have not tested them because they both only support Red Hat-based Linux distros.
It sounds like for most software you are using the Ubuntu package management system to check for vulnerable versions. Is that correct? And are you planning to add detection for binaries that live outside of the distro package manager? I am thinking of stuff like custom-compiled Nginx binaries for example. I realize it would be non-trivial to implement this but would consider it highly useful at least for a certain set of common software components.
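In spirit, a package-manager-based check could look something like this minimal Ruby sketch. The advisory data is made up, and `Gem::Version` is only an approximation here; real Debian/Ubuntu version strings (epochs, revisions) need dpkg's own comparison rules (`dpkg --compare-versions`).

```ruby
require "rubygems" # for Gem::Version

# Hypothetical advisory feed: package name => first fixed version.
# These entries are illustrative, not real advisories.
ADVISORIES = {
  "nginx" => "1.4.6",
  "rack"  => "1.6.2",
}

# True when the installed version predates the fix.
def vulnerable?(package, installed)
  fixed = ADVISORIES[package]
  return false unless fixed
  Gem::Version.new(installed) < Gem::Version.new(fixed)
end

puts vulnerable?("nginx", "1.4.5") # older than the fix
puts vulnerable?("nginx", "1.4.6") # already at the fix
```

The hard part, as the rest of the thread notes, is the advisory feed itself, not the comparison.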
If you can solve this people will throw boatloads of money at you.
Unfortunately it's not easy -- even writing scripts to detect running processes across all our servers to identify Java, Apache, Tomcat, etc etc has proven difficult to get right. Sometimes you can get enough info from extended process list info, sometimes not.
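For what it's worth, the naive version of that process-list matching might look like the sketch below. The patterns are illustrative, and they're exactly the kind of thing that breaks on wrapper scripts, renamed binaries, and truncated argument lists.

```ruby
# Hedged sketch: classify processes from their command lines (as you
# might collect with `ps -eo args`). Rule order matters: the Tomcat
# pattern must run before the generic Java one.
RULES = {
  /\bjava\b.*catalina/i => "tomcat",
  /\bjava\b/i           => "java",
  /\b(httpd|apache2)\b/ => "apache",
  /\bnginx\b/           => "nginx",
}

def classify(cmdline)
  RULES.each { |re, name| return name if re.match?(cmdline) }
  "unknown"
end

puts classify("/usr/bin/java -Dcatalina.home=/opt/tomcat start")
```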
What advantage do I gain from this over periodically running various distro tools that compile CVEs and security advisories into a flat file database and search on top of that, e.g. pkg audit on FreeBSD and glsa-check on Gentoo?
It is targeted more at OS-level vulnerabilities (including IIS) rather than application dependency vulnerabilities, but may provide the solution you're looking for.
How does this work? Do I need to run your software on my servers? Software that calls home to some third party is a problem for many use cases.
There's a stunning amount of elbow grease involved in that.
If you're a random company, you have an engineer sitting around whose job involves reading a dozen mailing lists - and we want to save everyone from that redundancy.
Oh yeah I know, I think there's value in what you're trying to do. I would pay $9/mo if it also covered application dependencies. I was just curious how it works. What's the time between CVE release and getting a notification from your service?
+1 for Python. You're probably aware of a company called Sonatype that does something like this during the dev process. Their business is growing fast. As far as I know nobody is doing this in production. I think you've found a nice niche that has a lot of potential. Good luck.
1. They all do a great job! But there's this last mile problem with managing the information they do put out.
If you can handle the downtime, unattended-upgrades will work just dandy. If your postgres restarting in the middle of the night gives you pause, our service can help you choose how to roll out your security upgrades.
2. We cover app dependencies as well! For now just Ruby, but others are coming pretty soon.
Currently, there is no straightforward way of checking Ubuntu package versions against CVEs. Debian provides this through debsecan[1], but this tool is pretty much broken on Ubuntu[2].
Correct, but if this reduces the 0-day/1-day window then it's very useful if your server does anything important. The difference between responding to Shellshock in 15 minutes vs. 2 hours could mean exploitation.
"Just"? There are very well-paid full-time people at big-name companies doing pretty much this. Auditing is a real big pain, and this is certainly not the first company trying to address it.
I had that exact idea a while ago and filed it into my "ideas that might be fun and might be successful" list. Time to cross it off. Good luck with this, it's a great idea!
>Why are you sending the full file contents from the agent to the client?
1. We only send files you tell us to send in the configuration, and you're not going to be storing any sensitive information in your Gemfile.locks or package.jsons.
It's not functionally any different from us parsing it client side - but allows us to support new platforms without having to update the agent.
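For illustration, a toy version of that server-side parsing might look like the sketch below. Real lockfiles have more sections (GIT, PATH, DEPENDENCIES), which Bundler's own `LockfileParser` handles properly; this only picks out gems in the GEM specs section.

```ruby
# Hedged sketch of server-side Gemfile.lock parsing: top-level gems in
# the GEM specs section appear as "    name (version)" (4-space indent).
def parse_lockfile(text)
  text.scan(/^    (\S+) \(([^)]+)\)$/).to_h
end

lock = <<~LOCKFILE
  GEM
    remote: https://rubygems.org/
    specs:
      rack (1.6.0)
      rails (4.2.1)
LOCKFILE

p parse_lockfile(lock) # name => version pairs
```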
>CRC is not a hashing algorithm.
2. You're absolutely correct! Which is why we're not using it as a cryptographic hash, i.e. as part of an HMAC.
We're only using CRCs to determine if a file has changed, which is the purpose of CRCs :).
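A minimal sketch of that change-detection use, with Ruby's `Zlib.crc32`:

```ruby
require "zlib"

# CRC32 as a cheap change detector: if the checksum matches the last one
# we saw, skip re-sending the file. It is not collision-resistant and not
# safe against an adversarial input, but it's fine for answering
# "did this file change since the last check-in?".
last_seen = Zlib.crc32("gem 'rails', '4.2.0'\n")
current   = Zlib.crc32("gem 'rails', '4.2.1'\n")

changed = (last_seen != current)
puts changed ? "file changed, re-send" : "unchanged, skip"
```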
Do you have any other concerns? We've spent a lot of time being paranoid, and we know it's a hard communication problem.
"You're not going to be storing any sensitive information in your Gemfile.locks"
That's not accurate. When using private gems hosted on github one of the common approaches is to use this in your Gemfile (which shows up in the lock):
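A hypothetical example of the pattern being described (the gem name and env var are placeholders, not from the original post): the token gets interpolated into the git URL, and Bundler records the resolved URL, token included, in Gemfile.lock.

```ruby
# Gemfile (hypothetical) -- the interpolated token ends up in Gemfile.lock
gem "internal-tools",
    git: "https://#{ENV['GITHUB_TOKEN']}@github.com/example/internal-tools.git"
```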
Right. I should've been prepared for this response. I can't confirm whether that shows up in your Gemfile.lock, but I can say that you really shouldn't be doing this, and should switch to keys instead.
We'll likely add a check to beg you to change this in the near future should it show up.
I agree with you there, but to this point at least, I haven't seen another good way to handle this with something like Heroku. It does show up in the Gemfile.lock, though (just verified).
Looking around I did just find a buildpack that tries to solve the problem. That doesn't really apply when using your service on my own servers though.
I guess the bigger question is simply, are you going to limit your audience only to people already following best practices?
Judging by the rest of the responses in this thread, using SSL when transferring these files would make a lot of people feel better about the service.
>AppCanary then doesn't monitor for the fact I have an outdated version of httpd for example?
It does! So long as you've installed it via the package manager. Right now we support Ubuntu; Debian support will be out soon, and RHEL systems in the very near future.
In the near future we'll add the ability to scan hand-compiled binaries, but that's a technical challenge that depends on solving the first part of the equation (knowing what's vulnerable) really well.
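One plausible, partial approach to hand-compiled binaries is parsing the version banner a binary prints about itself, e.g. the output of `nginx -v` (which goes to stderr). A hedged sketch, which obviously misses backported patches and binaries that report nothing:

```ruby
# Extract a version number from an nginx version banner, e.g.
# "nginx version: nginx/1.8.0". Returns nil when no version is found.
def version_from_banner(banner)
  banner[%r{nginx/([\d.]+)}, 1]
end

puts version_from_banner("nginx version: nginx/1.8.0")
```

Knowing the version still isn't knowing what's vulnerable, which is the "first part of the equation" mentioned above.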
I like the idea, but most of the servers we manage have outgoing firewalls that block them from talking to the internet. We produce installed-package lists during deployment (as much as possible we run immutable pre-built images and replace the image rather than upgrade in place), which could be sent to a service like this, but we wouldn't want to start punching holes and adding routes for it. To work as-is, we'd need to add duplicate canary servers in an isolated environment to talk to the service.
@phillmv - +1 for the Docker+supervisord version !
Would strongly recommend an "in-container" version, so that I can bake your agent into my Docker images. Remember that if I run my Docker containers on CoreOS, then it is very hard to install something on the host.
It could be a separate docker container with a volume mount to the docker sock. That's probably the best option, a bit better than baking it into all of your images.
I was wondering if there's a version of this I could run on demand in CI (e.g. test-kitchen), so that I can fail the build when a fresh CM build turns out to be vulnerable.
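Such a CI gate could be as simple as the sketch below; `findings` stands in for whatever output a scan might produce (the service's actual interface is not described in the thread), and in a real script you'd pass the return value to `exit`.

```ruby
# Hedged sketch of a CI gate: fail the build (non-zero status) when a
# post-converge scan reports vulnerable packages.
def ci_gate(findings)
  if findings.empty?
    puts "build clean"
    0
  else
    warn "vulnerable packages: #{findings.join(', ')}"
    1
  end
end
```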
A couple of years ago there was a similar startup called SourceNinja. They used a different method to get the dependency/library info though. It turned out to be not as profitable as they hoped...
"Hey Hacker News! Try out our pilot program." Just sign here.
It's another wannabe startup that asks people to sign up before disclosing terms, or, in this case, anything at all. And they want access to your server. Right.
No business address on the site. A low-rent "domain control only validated" SSL cert. Anonymous domain registration. They do show up as a Delaware corporation, all of two months old.
That's not enough. This unknown, anonymous outfit wants you to trust them to collect info about security vulnerabilities on your site. That's asking a lot.
Remember, in B2B you're selling to the man in the chair[1]:
I don’t know who you are.
I don’t know your company.
I don’t know your company’s product.
I don’t know what your company stands for.
I don’t know your company’s customers.
I don’t know your company’s record.
I don’t know your company’s reputation.
Now—what was it you wanted to sell me?
Well, I for one can vouch for the technical co-founder as I know him personally and worked with him before.
But I'm just another HN lurker with barely any karma.