Ask HN: How do start-ups deal with (black hat) hackers?
9 points by ia on July 21, 2008 | 22 comments
With all of the start-ups coming out of YC, I would imagine it's just a matter of time before one falls prey to a successful attack--SQL injection, XSS, etc. Any start-uppers on here have experience defending their turf? How does a cash-strapped start-up devote enough time to security when a million other things are calling for attention?

Edit: I should clarify--I expect the majority of the HN community is aware of the best-practice solutions to harden a new website against everyday security concerns. What I am curious about is if anyone has dealt with a particularly sophisticated attack. What was the fallout? Was it completely successful? How did you recover? I agree that a new start-up is a very small target, but... a target is still a target.



A decent server-side programming environment will provide protection against SQL injection by default - if you're using a database library that requires you to glue strings together and escape arguments manually, stop using that and use something that does safe parameter replacement (you can write a function to do that in about three lines of PHP, for example).
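For example, a minimal sketch with Python's standard sqlite3 module, whose `?` placeholders do the parameter replacement for you:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

# Attacker-controlled input is passed as a bound parameter, never glued
# into the SQL string, so it can't change the structure of the query.
evil = "' OR '1'='1"
rows = conn.execute("SELECT id FROM users WHERE email = ?", (evil,)).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```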

XSS is pretty easy too - use a template mechanism that escapes all output by default. Django has thankfully been doing that for quite a while now; I think there are Rails plugins that will do it too.
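In a framework without escape-by-default, the same idea can be sketched with Python's standard library (html.escape here stands in for whatever your template layer provides):

```python
from html import escape

# Escape-on-output: anything user-supplied gets encoded before it hits
# the page, so injected markup renders as inert text instead of running.
comment = '<script>alert("xss")</script>'
safe = escape(comment)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```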

CSRF is the scary one, mainly because most developers still don't know what it is or how it works. Look it up online and spend some time implementing a decent mechanism for adding hidden tokens to your forms that are derived from your user's cookies. Django has CSRF middleware but it's a bit of a kludge; Rails has a pretty good solution for this as far as I know.
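A rough sketch of that hidden-token idea in Python; the secret, session id, and function names are all hypothetical, and real frameworks handle token expiry and per-request details beyond this:

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # hypothetical; never sent to the client

def csrf_token(session_id: str) -> str:
    # Derive a per-session token from the user's session cookie, so a
    # third-party site that can't read the cookie can't forge the token.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf(session_id: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking the token byte by byte.
    return hmac.compare_digest(csrf_token(session_id), submitted)

# Embed csrf_token(...) in a hidden form field; verify it on every POST.
token = csrf_token("session-abc123")
assert check_csrf("session-abc123", token)
assert not check_csrf("session-abc123", "forged-token")
```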


Raise the barrier to entry as high as you reasonably can.

If you aren't securing your servers, running a good firewall, logging a solid audit trail remotely, sanitizing your inputs, parameterizing your sql, encrypting your credit cards, and hashing your passwords, you're failing your customers. If you're doing all of that, and are consciously thinking about security as you go, you're doing alright.
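On the password-hashing point, a minimal sketch using Python's standard library (the salt size and iteration count are illustrative choices, not recommendations from this thread):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A per-user random salt defeats precomputed (rainbow-table) attacks;
    # many PBKDF2 iterations slow down offline brute force.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
assert verify_password("correct horse", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```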

Frankly, if you're doing that, the benefits of attacking you usually fall below the effort required. There are plenty of sites out there with SQL injection vulnerabilities and unencrypted credit cards in their DBs. There are a million sites with giant XSS holes. If you don't have those holes, they'll leave you alone.

The bigger worry at that point is dealing with a DDoS, either at the network level or the service level. It's much harder to protect against, much easier to launch against you, and has a much more measurable bottom-line impact.

Don't forget about security. I know you're cash-strapped, I know you have a million features you want to build, bugs you need to fix, etc... But put yourself in your users' shoes. They've entrusted you with their personal information and possibly their credit cards. Your highest priority is to protect that data and live up to their expectations of privacy and security. Display bugs in IE take a distant second place to that responsibility.

I've dealt with a large number of attempted attacks both on my small sites, and when I worked for a Fortune 500 doing security architecture for their e-commerce sites. My stuff is usually pretty secure, and I have lots of notifications about suspicious activity giving me visibility into who's prying at the door. Thankfully no one has actually gotten in (that we're aware of) so we haven't had to respond to that, although we did have plans for that eventuality involving mostly our legal team and the FBI. Frankly other than capturing as much audit trail data as you can on a separate internal secure server to provide to the LEO, there's not much you CAN do. California has a number of notification and disclosure laws which cover a number of scenarios. Good karma probably means you should abide by those laws for your non-California customers as well.

I'm happy to answer any specific security questions you might have via e-mail (devon@digitalsanctuary.com), although I'm sure there are a large number of even more qualified people than me out there.


One thing I'd say is to never underestimate the ingenuity of crackers. We had a system with an email unsubscribe page. On one of its sub-screens, because it was the second step, I had assumed that the user had already identified themselves by entering their own email--so if they gave incorrect details, I would print out the email address on file.

A few weeks into that, my test addresses in our system started receiving the odd spam. It took me a few days, but eventually I noticed that on occasion there would suddenly be thousands of errors coming from the unsubscribe page as someone scanned through the user_ids from 1 to whatever. I made the connection and was able to fix that particular leak. Fortunately the spam stopped then! I guess they didn't bother to keep a list of the addresses.

Even if I'd been aware of that hole, I would have thought it very unlikely that a cracker would find it and abuse it within weeks of the business launching.

Anyway, so since then:

* I am extra careful whenever printing out personal information, particularly valuable stuff like emails, to make sure the person has properly authenticated first.

* I don't assume that crackers won't get somewhere just because it seems far-fetched. I don't know what tools they use, or whether it's just being observant, but they're pretty good at finding holes, and quickly too.
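The first lesson above can be sketched like this (all names and the data store are hypothetical); the fix is to demand proof of ownership before echoing personal data, not just an id that anyone can enumerate:

```python
# Hypothetical sketch: never print personal data based on a
# user-supplied id alone; require proof the requester owns it.

USERS = {1: {"email": "alice@example.com", "token": "tok-alice"}}

def unsubscribe_step2(user_id: int, auth_token: str) -> str:
    user = USERS.get(user_id)
    # The vulnerable version printed user["email"] whenever the id
    # existed; scanning ids 1..N then harvests every address.
    if user is None or auth_token != user["token"]:
        return "Sorry, we couldn't verify that request."
    return f"Unsubscribing {user['email']}"

assert "alice@example.com" not in unsubscribe_step2(1, "forged")
assert "alice@example.com" in unsubscribe_step2(1, "tok-alice")
```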


rule of thumb: sanitize your inputs.

if you do this correctly and thoroughly, you're better off than like 70% of the web apps out there. doesn't take much time, just requires that you set up everything and remember to do it for everything from GET/POST to cookies to manipulating the DOM.

edit to clarify, based on child comments:

sanitizing inputs doesn't necessarily mean you apply one filter to everything. it just means that you're ensuring that what you're taking in from the app is something your code is expecting and can handle. if you're expecting a sql query or some unescaped html, you don't need to filter out the query or the html.
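A small illustration of that idea in Python: validate at the boundary that the input matches what your code expects, rather than blindly escaping everything (the username rule here is hypothetical):

```python
import re

# Accept only what the app expects for this field; reject the rest
# before it ever reaches SQL or HTML.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(raw: str) -> str:
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

assert validate_username("alice_99") == "alice_99"
try:
    validate_username("<script>")
except ValueError:
    pass  # rejected at the boundary
```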


For Rails folks, we wrote a plugin to sanitize the entire params hash in a before filter. This is protection against XSS attacks.

http://code.google.com/p/sanitizeparams/


Not a Rails specialist, but I don't understand the point of sanitizing input. It seems to me the correct place to sanitize is output (in HTML or SQL). Otherwise it all gets messy - "escaped" text ends up in the database and might be escaped once more during output (if you use the typical Rails h() method). And what do you escape: SQL, or HTML, or both (and don't forget JavaScript)?

Also, sometimes the user might want to save code snippets in the database; it would be bad to escape them.


Sanitizing input is one of the best defenses against cross-site scripting and SQL injection attacks, since those account for most security attacks. Equally important is to HTML-encode the output so that it does not contain any characters which could be used in these kinds of attacks.

Cross-site scripting and SQL injection attacks can almost always be prevented by checking the input for unwanted characters; in case something does get into your system, the second level of defense is to HTML-encode it when you output it to the screen.


Sure, sanitizing on input doesn't fit all situations but I think it's good for many. Note, I'm only talking about XSS (think preventing iframe and script tags).

1. The number one reason is that it reduces programmer error. If we never store corrupted data then we're never at risk of displaying it. This is a practical concern because I've observed that I, and other programmers I've worked with, often make errors. I have no knowledge of whether or not this is true for all programmers.

2. Nothing gets messy because we don't escape anything. Certain types of data simply aren't let in.

3. There are some edge cases where we do relax the rules, primarily for the admin user.

4. People should still be able to store code snippets but they need to escape them first.


there's a difference between correctly sanitizing your inputs and reformatting everything you take in.

you don't restrict all things to the same conditions. if you need to take in text that shouldn't be escaped, don't escape it. just make sure you deal with it correctly.

edit:

if that doesn't help and you still are wondering why you should sanitize inputs, read up more on xss and sql injection vulnerabilities.


Surely this approach means it's impossible to discuss XSS attacks on a forum-style site running that filter?


That's just my game.

First, you have to be purpose driven in your start-up. Everything you do has to have a purpose, security included. So, ask yourself what needs securing and why. Will an attack cripple your finances, reputation or maybe something else? Once you know what you are protecting and why, you can then make a better decision regarding the options available for protecting that resource. You then have to balance your needs: is it more important to pay (in time or money) for a new widget or business thing or do you need to devote that resource to security first.

Now to the more high-level stuff you asked. 1) Yes you will be attacked but the sophistication will vary depending on the attacker and their purpose. 2) You will probably survive, just a little bruised but smarter for the experience.

I have worked with large and small banks before, during and after attacks (different banks, at different phases). Some of the attacks were very well targeted and sophisticated. Yet all of these banks are still opening and managing financial accounts. However, they cannot ignore the problem and assume it will always turn out that way.


I think the best advice is to take a common sense approach. Do the things you would normally do to protect against things like this and don't sweat it too much. As a startup, you're a pretty small target. I doubt you'd even register on anyone's radar.

It seems like the motives for a lot of these attacks involve extortion or high profile destruction. You're basically a worthless target to anyone interested in either of these.

If you've put effort into protecting yourself you're making the decision to attack even more difficult for a blackhat. If it's going to require about the same effort to attack you and some larger target, chances are they'll go for the bigger fish.


The one caveat I'd toss out there, in the XSS realm (which can be used to deface your site, steal user credit cards, etc...), is to be careful about 3rd-party javascript includes. Those players are really big fish, since a successful hack there also means they've hacked 1,000 customer sites at the same time. Including yours.

Treat externally sourced javascript like a snake. It might be fine, or it might be poisonous. Try to use local javascript as much as possible, and use SSL when you can (to prevent man in the middle and most cache poisoning attacks).


Exactly. I used to sweat security too much until I took a detailed security analysis to our CIO once and she said, "Yes, these are all risks, but what are we going to do if the building burns down?"

There are more potential disasters than security breaches and many of them are both more common and more disastrous.


If the building burns down you haven't given 100,000 of your customers' credit card numbers to the Russian mafia.

Your building, and most of its contents, are insured and replaceable.


I'm pretty sure the business disruption of your HQ burning down is an order of magnitude greater than losing your customers' credit cards.

On the credit card front, there are a number of processes that limit the damage. You can report the loss to both your customers and their card issuers. This will put them on alert for fraud and they can be issued new cards. Credit cards already come with fraud protection by default so none of your customers are actually on the hook for fraudulent activity. The greatest hit is probably taken by the merchants where the cards are used. During all of this, your business continues to run.

On the fire front, in many cases business is effectively halted. Some people are set up to work remotely immediately, but do they have contact information for everyone they need to work with? Some people have no capacity for working remotely. It's almost guaranteed that you lose some data which has not been properly backed up. And then you spend some period of time replacing all of the things that made you want an office in the first place.


You raise some good points, but the original question was specific to startups, so I'm making the assumption that 90% of the business really lives on hosted servers or is backed up on hosted servers somewhere else, and everyone has the ability to work from home on weekends, etc...

On the credit card front, you can only report the loss if you know about it. Lacking proper security, audit trails, and alerting mechanisms, most people never know when their site's been compromised. Vast numbers of data theft incidents occur through SQL injection, exposed data files, plain-text user data being e-mailed around, etc... And you'll never know about it. Having proper security and encryption processes, rules, etc... makes a BIG difference.

I'd also argue that for you and your 6 co-workers, having the building burn down is a bigger deal than the data loss. But for your 100,000 customers, having to deal with potential CC fraud and identity theft issues is a MUCH bigger deal. Many cases of identity theft and associated credit and collections issues take YEARS to fix, if ever, and cause a great deal of hardship in the meantime. Your business can be up and running quicker than that.


Just to clarify: I told the fire story as an anecdote. Most startups should have neither a large HQ nor 100k credit card numbers. But even if they did, there are more risks to the business than just hackers. The primary one is probably that nobody likes your product.


http://www.modsecurity.org/ can provide an additional layer of defense for your web apps.


And, if you're an average developer, it could eat literally days and days of time with frustrating and odd failures in corner cases with your web applications.

If you build an application, I suppose doing it on systems with mod_security already present would allow you to grow to avoid pains with it. But adding mod_security later is a tremendous pain and overnight can turn a working (but complex) web app into a field full of random, hard-to-find sinkholes that confuse users with random HTTP rejections.

On one of our more popular projects, after adding in a dozen or more rule exceptions for each false positive (and not seeing a light at the end of the tunnel), we just had to give up on using mod_security. It was just getting way too many false positives and eating up time we could spend improving our user experience.

There's probably a whole lot of reduced security profiles, advice, etc., that decrease the noisiness and increase the ROI, but there's only so much time we were able to invest in adding this additional layer. We spent 250% of the expected time with nothing to show for it except the following lesson: adding mod_security without impacting your users seems awfully hard for a decent-sized web app.

If you're going to use mod_security, build your app with it in the picture from the beginning. Or learn from someone (who will hopefully comment on this thread) how to enable it in such a way as to eliminate the false positive rejections which can frustrate your users.


completely ditch their rules and start fresh. use it as a simple last line of defense solution protecting important things from leaking out.


The vast majority of security issues we see (and we see a lot of them--systems management has been how I make my living for about ten years, likewise for my co-founder) are not related to web apps at all. They come from basic misunderstandings of the underlying system and the software that runs their services.

So, here's the first three recommendations I make to pretty much everyone concerned about security (which should be everyone with a machine connected to the Internet):

1. Use strong passwords. A strong password is 8+ characters long, has numbers, letters and optionally symbols. A strong password is changed every year or so. A strong password does not contain dictionary words.

2. Update your software. If you use Linux, use one that has good package management and a long lifecycle so that packages are easy to install and will be available for the three+ years that the average server is in service. CentOS/RHEL and Debian are the only ones I really trust on servers. Ubuntu LTS releases will do, in a pinch. If your system does not make it really easy to do this, then you need a better OS.

Imagine the worst possible time for a security problem to show up--you're at the beach with no WiFi for miles and all you can do is call someone to walk them through the process--and if that sounds scary, then you've got a problem. It's pretty easy to say, "OK, login to the system using PuTTY (or better, Webmin, but I won't demand that everyone run our software) and type 'yum update httpd'", or 'apt-get install apache2', if it's a Debian system. But imagine, "OK, go to Apache.org, and click on the download link...oh, wait, click on HTTP Server, and then click on download. Now download the latest one. No, not to your local machine...download it to the server. Use wget. Yeah. Type wget and paste the link from the Apache website. OK, now untar it. No, type in 'tar zxvf blah-blah-blah'. OK, look for the previous Apache version directory, because we need to copy the default configuration over. I don't remember what it's called..." etc.

Be careful with stuff installed from sources other than native packages, is what I'm saying here. You'll have to do it for a few things, obviously, and your own app will probably be in SVN or git, but don't make it a habit to get everything from non-standard sources. It's just a security disaster waiting to happen.

3. Don't run unneeded services. If you don't know what it is, google it. If you don't need it, turn it off. If it exposes a port to the Internet, keep a very close eye on updates for that service. CentOS/RHEL and Debian usually roll out security fixes within hours of exploits being discovered...this is usually good enough. If your database doesn't need a public port, run it on localhost. If you do have a separate database server, see if your host can give you a private network for your web servers and backend database(s). Some won't even charge extra for private connections between multiple boxes in the same data center (though most will charge a few bucks). If they're not on the same physical segment, this generally won't be possible, though.

This is the low-hanging fruit, and should be standard practice for pretty much everybody with servers to manage.
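The password rules in point 1 can be turned into a simple check. This is a sketch under simplifying assumptions: it only tests whether the whole password is itself a dictionary word, and a real checker would look for embedded words too:

```python
import re

def is_strong(password: str, dictionary: set[str]) -> bool:
    # Mirrors the rules above: 8+ characters, letters and numbers,
    # and not a plain dictionary word.
    if len(password) < 8:
        return False
    if not re.search(r"[A-Za-z]", password) or not re.search(r"\d", password):
        return False
    if password.lower() in dictionary:
        return False
    return True

words = {"password", "letmein"}
assert not is_strong("password", words)   # dictionary word, no digits
assert not is_strong("abc12", words)      # too short
assert is_strong("c0rrecth0rse9", words)  # long, mixed letters/digits
```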



