The problem is that it's relatively easy to brute-force attack even a salted hash today.
On the first day, people stored passwords in plain text. If someone got access to your database, they had all the passwords.
On the second day, people decided to hash the passwords so that attackers couldn't reverse them. Then the attackers created rainbow tables that correlated each hash with its associated password, since a given password always hashes to the same value (so you have "109BCA" in your database, but they have a table showing that "12345" hashes to that value).
On the third day, people decided to salt the hashes to render the rainbow tables ineffective. Now, each password would hash to a different value so they couldn't just look up the password for a given hash. However, as computing power increased it became easy to just brute-force the password. You have the hash and you have the salt so you can just try every password with the hash until you get a match.
require 'digest'

hashed_password = "ACBDEF1234567890"
salt = "12345"
possible_passwords = ["password1", "ilikesheep", "micro$oft"]

real_password = nil
possible_passwords.each do |pass|
  # The attacker already has the salt, so each guess costs just one hash
  if Digest::SHA1.hexdigest("#{pass}#{salt}") == hashed_password
    real_password = pass
  end
end
The problem is that code like that has gotten really cheap to run and it's incredibly parallel (you can have loads of machines each take a piece of the workload - oh, and hopefully no one will make a joke that you'd never write that in Ruby; I just felt it would be easy pseudo-code to demonstrate). You can just try combinations of passwords at random, but there are lists of more common passwords that you can try first, making it go even faster. Hashing algorithms are meant to be fast because they're meant to be usable on large pieces of data. As such, it also becomes fast to brute-force check all sorts of combinations.
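To make the parallelism concrete, here's a minimal sketch (the candidate list and plaintext are made up for illustration) that splits the workload across Ruby threads, the same way an attacker splits it across machines or GPU cores:

```ruby
require 'digest'

salt = "12345"
# Pretend this is the stolen hash from the database
hashed_password = Digest::SHA1.hexdigest("ilikesheep#{salt}")
candidates = ["password1", "ilikesheep", "micro$oft", "hunter2"]

found = nil
# Each thread takes a slice of the candidate list; in a real attack
# these slices would go to separate machines
candidates.each_slice(2).map do |chunk|
  Thread.new do
    chunk.each do |pass|
      found = pass if Digest::SHA1.hexdigest("#{pass}#{salt}") == hashed_password
    end
  end
end.each(&:join)

puts found  # => "ilikesheep"
```

The work divides cleanly because each guess is independent - there's no coordination needed beyond reporting a match.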
On the fourth day, people started using things like bcrypt because bcrypt was made to be slow. The fact that bcrypt is slow means that if someone wants to brute force it, it will be very slow going for them. If one can check 10,000 SHA1 hashes in a second, but only 10 bcrypt hashes in a second, it will take 1,000x longer to crack the bcrypt stored password (those are just made-up numbers as an example).
Salting is better than not salting because they have to brute force the password. However, as computing power increases it isn't so much better because brute forcing is becoming easy. One needs to use a slow algorithm to make sure that cracking it will also be slow (prohibitively slow). Bcrypt also allows you to specify how much work you want it to have to do. This way, as computing power increases, you can increase the amount of computing power needed to compute the bcrypt. By contrast, hashes are meant to go fast and so every day they're getting less secure.
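The tunable-work-factor idea can be sketched with PBKDF2 from Ruby's standard library (bcrypt's cost parameter works analogously; the iteration counts here are illustrative, not recommendations):

```ruby
require 'openssl'
require 'benchmark'

# PBKDF2, like bcrypt, exposes a work factor: the iteration count.
# Raising it costs the defender one slow call per login, but costs
# the attacker that same slowdown on every single guess.
salt = OpenSSL::Random.random_bytes(16)
digest = OpenSSL::Digest.new("SHA256")

fast = Benchmark.realtime do
  OpenSSL::PKCS5.pbkdf2_hmac("password1", salt, 1_000, 32, digest)
end
slow = Benchmark.realtime do
  OpenSSL::PKCS5.pbkdf2_hmac("password1", salt, 200_000, 32, digest)
end

puts "1k iterations:   #{(fast * 1000).round(2)} ms"
puts "200k iterations: #{(slow * 1000).round(2)} ms"
```

As hardware gets faster, you raise the iteration count (or bcrypt cost) and the attacker's per-guess price goes back up - something a plain SHA1-plus-salt scheme can't do.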
Salting was introduced in the 1970's to combat the underlying class of attacks that rainbow tables optimize. Think of rainbow tables as a compression scheme; the attack is precomputation.
That said, I don't know why, with today's computing power, people don't make rainbow table chains longer (or shorter, it's been a while since I read the paper) to save on disk space.
Not exactly. I could have a rainbow table covering every possible password of eight characters or fewer and find a given password in roughly log(table_size) time, but brute-forcing it might take several days. GPUs also make it possible to generate bigger rainbow tables faster.
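The precomputation trade-off can be sketched with a plain (unchained) lookup table; a real rainbow table compresses this by storing only chain endpoints, but the attack is the same:

```ruby
require 'digest'

# A full lookup table is the uncompressed form of a rainbow table:
# hash every dictionary word once up front, then each stolen hash
# costs a single lookup instead of a fresh brute-force run.
dictionary = ["12345", "password1", "letmein"]
lookup = dictionary.each_with_object({}) do |pw, table|
  table[Digest::SHA1.hexdigest(pw)] = pw
end

stolen_hash = Digest::SHA1.hexdigest("12345")
cracked = lookup[stolen_hash]  # => "12345"
```

This is exactly what a per-user salt defeats: the attacker would need a separate precomputed table for every salt value.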
I don't want to sound too critical of it, but it doesn't seem as compelling as the Nexus One was. When the Nexus One came out, it had an 800x480 display, which was a higher resolution than most devices at the time; a 1GHz processor, which was considerably faster than competitors'; a 5 megapixel camera; 512MB RAM; etc. This new Nexus S doesn't seem to improve on that formula much: same processor speed, RAM, camera resolution, and screen resolution.
Plus, I think there are more compelling options for T-Mobile right now. Both the myTouch 4G and HTC G2 have HSPA+ allowing for much faster speeds than the HSPA that the Nexus S will come with. Plus, I feel that the 2-color-per-pixel model that Samsung is following with its AMOLED displays makes the devices a lot less useful for my primary purpose: text. AMOLED displays don't have the smooth text that is available on all non-AMOLED 800x480 class devices.
The Nexus One was a big leap forward. It doubled the specs we were used to seeing on processor and RAM, was higher-res than anything except the Motorola Droid, and it included a top-notch camera for the time. The new Nexus S seems like it's playing catchup and is, in fact, not as nice as competing devices.
Now, not having to deal with (Samsung|Motorola|HTC|LG) for OS updates and not having their or (Verizon|AT&T|Sprint|T-Mobile)'s crapware installed on it would be really nice. I guess (for me) I'd just rather get a device that supported HSPA+, had a 3-color-per-pixel display, more RAM (the myTouch 4G has 768MB), etc. The hardware isn't severely lacking in any way, it's just the type of hardware that was average for a phone coming out in July of this year.
Can't find the numbers right now, but I'm pretty sure that, clock for clock, Hummingbird is significantly better than Snapdragon.
The GPU also matters a lot. One reason the iPhone feels so awesome despite the somewhat slower CPU is that its GPU blows 80% of Android GPUs out of the water. Once again, I'm pretty sure the GPU performance (if it's anything like the Galaxy S) utterly destroys the Nexus One's.
The Galaxy S (which uses the 1GHz Hummingbird processor) benchmarks faster on the Quadrant test (which does test 3D performance) than the Nexus One with Android 2.1, but slower than devices like the Droid X. It's likely that the new Nexus S would be faster than the Nexus One, but not as fast as many other devices that have been out for a while.
So, it is an upgrade, but it's below what has already come out from other manufacturers - in contrast with the Nexus One which was an enviable top of the line device when it came out.
EDIT: In fact, the Nexus S looks pretty identical to the Galaxy S sans the Samsung software.
Has the Android UI been rewritten to use the GPU? One of the main reasons why scrolling/panning always feels smoother on the iPhone is that the user interface rendering uses the GPU whereas Android does not. If the Nexus S has a fantastic GPU and the UI doesn't use it, that kinda sucks.
No. Honeycomb will likely be the first to use GPU UI rendering.
The N1 with 2.2 is extremely fluid, with minor stuttering that is attributable to garbage collection (the thing is endlessly pausing to do GC). 2.3 includes a highly optimized concurrent garbage collector, so that should improve things significantly.
However honestly I think it's one of those things that matters a lot if you're, to put it impolitely, dicking around, but becomes irrelevant when you're using it as a tool day to day. Similar to how everyone using a Kindle for the first time hits back and forward a bunch of times and complains about page flip times, whereas to actual readers it just doesn't matter.
The difference is that striking soldiers can cause irreparable harm while the damage done by teachers can be easily repaired.
Let's say some soldiers start striking during a war for better pay. That puts the country in an immediate danger as the enemy advances and potentially causes irreparable harm (a change that cannot be undone). Even if one can retake the positions given up during the strike, many will die doing so. Clearly, there's a problem there.
Now let's say some teachers start striking in September. Accepting Adams' premise that an uneducated populace is a threat to national security, what effect do striking teachers have on national security? Well, maybe those students have class from October through July that year, or they lose a few weeks of schooling. The difference is that the harm is not irreparable. The harm is merely a delay or, at absolute worst, a very marginal harm. In fact, any harm is probably about the same as moving from one school district to another. Different schools teach in slightly different orders and in slightly different ways, so when moving schools, a student might lose out on a small portion of learning just as they would during a strike.
Others can argue whether unions are good or bad, but I think there's a clear difference between the irreparable harm that can be caused by a military strike when compared with the reparable harm caused by a teacher strike. So, rather than September to June, students have an October to July school year one year. There's a distinct difference there.
Now, before one says, "well, what about strikes during peacetime", the military operates under a readiness principle. While I don't believe countries are lining up to attack the US, the whole point of a military is to be ready for that possibility. So, it's peacetime, the army is on strike, and then someone attacks. By the time you order the strike to stop (due to it no longer being peacetime), irreparable harm could already be done. Striking soldiers wouldn't be on bases doing drill exercises. They might be at home thousands of miles away. In fact, Israel fought a war somewhat like this. During Yom Kippur, religious Jews don't eat, don't use electricity, and don't work. That's the perfect time to attack - most of the military was home and wouldn't get fast word that an attack had occurred, and some wouldn't do anything about it even if they heard, for religious reasons. Suffice it to say, a union on strike could provide an attacker with a similar advantage. Soldiers aren't on base, some might not come back if ordered to end the strike, some might not hear about it immediately, etc.
The difference is irreparable vs reparable harm. A strike by a teachers' union can be repaired by teaching a bit into the summer and the harm caused by that lack of teaching already happens to students when they move schools and have a small mismatch in the curricula. I really love the premise that education is essential to national security because I agree that trade and education create stability that means war is very unlikely. I just don't see the harm caused by a teacher strike to be the same as the harm caused by a military strike.
I think it's probably got more to do with an (understandable!) nervousness by policymakers about what would amount to a second semi-official chain of command inside such an organisation. The union leaders would have an awful lot of leverage over the government and I can't see any politician giving that away willingly.
Police, firefighters, and hospital workers operate on the same readiness principles and are still allowed to unionize. The National Labor Relations Board strictly regulates how and when these workers can strike.
I find it very difficult to believe that the military and NLRB couldn't find a set of rules to govern military strikes (doh!) that wouldn't jeopardize our national security.
I'm not saying that I think the military should be able to unionize; they work under a whole different set of rules and values than the rest of us. I just find your argument a bit lacking.
Beyond the fact that the doctor's action is going to have the opposite effect of what he wants, there's a bigger issue. The rise of "reputation" sites like Yelp presents us with new challenges. Previously, it was a bit easier to manage one's reputation; a past bad act or customer complaint wouldn't be immortalized forever on the internet. Now even our own postings get archived for anyone in the future to see (and people are starting to care about what they post on sites like Facebook).
On one hand, this is problematic if the statements are libelous (and I'm not saying that they are in this case, but libelous statements do happen). The Streisand effect means, to an extent, that we don't want to draw attention to libelous things said about us since it will bring more attention to them. Whether the claim is true or false, fighting it might bring us greater harm. And some statements are untrue.
On the other hand, this might change the way we view people. Right now, we tend to think that a (singular) bad mark is indicative of a pattern. The thought is that if someone was arrested once, they likely did many other bad acts that they weren't caught for. Likewise, a few embarrassing photos on Facebook are sometimes thought of as evidence that someone wouldn't be a good hire. However, if we find that more and more of our lives are documented, maybe people will stop thinking that a blemish or two on a person's record means that they're a bad person - it could just mean that they're human and interacting with the world.
Personally, I wonder if this fear of documentation might be holding us back. Someone doesn't want their failure documented and so they privately don't try for something. Has anyone else not launched something because of fear of criticism - criticism that would be immortalized online?
I remember an MIT (Sloan business school) professor giving a presentation and warning the attendees not to take money from those who were giving you just enough to fail. It sounds like you're in a hard place and, as such, the investor is going to get more than the money is probably worth. It also sounds like you've already come to the conclusion that this isn't a great deal.
It's a little hard to judge. Are you on to something big like Twitter or Groupon, where you'll be a commonly known name, or more like the many startups that produce nice products but aren't going to change the world? For example, Tarsnap is one of my favorite things, but it won't become a household name. Likewise, how close are you to having a product that you could show off, having users, etc.? If you're closer to launching, you might be able to launch or do a private beta, Gmail style, and get better terms from potential investors who can see a product.
The impression I get is that you've already put a lot of work into this and you're coming up just a tad short. You're negotiating from a place of necessity and that isn't a good place to be.
The investor doesn't sound like a friend-type, but rather someone looking to control. If you aren't ready now, will you be ready when that investor's investment runs out? Maybe you're 75% there and will be 95% there when the investment runs out - just in time for the investor to swoop in and take the company from you when you need more money? I'm not saying this will be the case, but if you're going to be giving up that much, you should be absolutely sure that you can launch within the time the investment will give you.
This seems like a classic case where many people tried to cut corners and they're now trying to claim that they're the victim.
The professor claimed that he wrote the tests while he was actually using pre-fab tests. One can argue that the pre-fab tests are as good as or better than the ones he would write himself and that, as such, there's nothing wrong. Except that passing off another's work as your own is usually known as plagiarism. Given that claim, it is perfectly conceivable that students would assume these pre-fab tests were materials from the textbook publisher that they could study from. If that is the case, there is no generational disconnect about cheating, and the premise of the article is overblown.
Personally, I'm very sympathetic to the students. Logic tells me that if you're cheating, you try not to spread it around to a group of 200 - most of whom you won't know or trust. I mean, there are two ways it could have played out:
1) You send the exam (or it gets continually forwarded down the line) to 200 people telling them that it's cheating (acting with bad intent). Odds are that one of those 200 is going to tell the professor that you're cheating before the exam. Plan foiled.
2) You send out the pre-fab exam to people thinking it's just a study guide from the teacher's edition of the book since the professor makes up his own exams. It gets widely forwarded because, hey, awesome study guide! Then it turns out that it's the actual exam questions and someone tells the professor that.
#2 seems more believable because it came to light after the exam rather than before. It seems plausible that the professor had been lazy for years using pre-fab tests and by chance got caught this year passing them off as his own. Rather than say, "well, unfortunately a lot of students got a copy of the test beforehand since I wasn't making the tests myself, and we'll have to re-do the midterm", the professor tried to defend himself by attacking the students. By accusing them of cheating, it wasn't his laziness that caused a re-take of the midterm, but cheating students. He shouldn't be blamed for wasting their time; it's the cheating students.
At the beginning of the article, I really disliked the students - people who didn't want to put in the work trying to get a good grade they didn't deserve. It's possible that's what they were. However, I can't see any evidence that indicates that's the case. All of the evidence seems to point the other direction.
* The professor said he made the tests
* The situation came to light after the exam, not before, and one would think someone among the 200+ would have snitched before the exam
This isn't a generational disconnect on what constitutes cheating. The students aren't defending themselves saying that it's ok to get a copy of the exam beforehand. The soul searching that the professor needs to do is around his exam preparation.
Let's say there's an open-source econ exam online (someone's written and published it). I, as a professor, print off 600 copies and give it in my class. It just so happens that many of the students, while looking for practice materials, found that exam. It's my fault for using a public exam.
In this case, the professor was using semi-private materials. The questions seem to be from the teacher's edition of the book (you know, the kind with the answers already written in). Yeah, it's not "public", but it isn't quite private either. If students think that you're going to be making your own questions, maybe an exam in the teacher's edition of the book they're using seems like the perfect practice test. It's the material you've been covering, but not the exact exam.
I don't want to sound too harsh on the professor, but sometimes you have to own up. Saying, "I thought that the teacher's edition materials wouldn't be available to students. Unfortunately, they were" makes me feel bad for the situation that both parties are in. It's a little sketchy whether it's ok to grab the teacher's materials of a textbook if you're a student. Clearly it's not all roses - you know that it wasn't written for you (a student), but if it doesn't affect the course of the class it isn't so bad, right?
This feels like when Harvard Business School denied entrance to students who looked at whether other people were admitted. Phil Greenspun wrote about it (http://blogs.law.harvard.edu/philg/2005/03/08/). Basically, they gave students a URL that had a code in it (with no access check). So, students typed in example.com/admitted?stud=12345 and saw whether they got in. However, they could just change the number, see other people's results, and were accused of hacking. The school blamed the students for what was its own error when, really, its disclosure of admissions info without protections might have left it open to a lawsuit. And it isn't just "hackers": curious users wondering whether the software was really so bad, and those who made typos, could cause problems. Imagine that I'm #12346 and you're #12345. I accidentally type in 12345 and pull you up, realize my mistake, and pull up my own record. Now they think that you looked at yourself and then "hacked" the system to look at me, when you were innocent all along.
In this case, even if a few students had malicious intent, it's highly unlikely that a secret conspiracy of 200 malicious students could hold together; the vast majority just thought they were getting a practice exam. Now, the professor and the university want to make them out to be immoral cheaters to cover themselves. I'm not saying that getting access to teacher editions is morally above reproach. However, if you're operating under the assumption that the professor isn't using them, it's understandable and certainly not the type of immorality that the professor is trying to paint.
I'll leave the professor's side out of it, as you've covered it adequately, but on the students' side, even if they did think they were just getting a practice test or study guide, it became cheating the moment they realized it and failed to inform the professor that they had already seen the test.
That's exactly what I failed to articulate. I knew I was missing something because the students didn't feel blameless to me. That's why it feels like a situation where each side is trying to yell at the other rather than both accepting responsibility for messing up a little.
It just makes me sad because everyone is posturing. Rather than saying "well, life happens and things go wrong and we'll work together to fix them" they're working on who is at fault. The professor wants the students to admit "guilt" or face sanctions for cheating because then things are found in his favor. The students would rather blame the professor for being lazy and not making up the test himself when they should have come forward once they recognized that they had seen it. It's unfortunate when we (and I've done it too) try to cover ourselves rather than working toward a solution.
Having seen the video, and some of the earlier articles about this issue, I don't see anywhere that the professor stated that he had written the test himself. I certainly can't find it in my nature to justify the students' actions simply because he used a set of stock test questions for the test.
The sheaf of papers he holds up as the pool of test questions was easily over 100 pages long; not something you would find in the teacher's version of a textbook. It would appear to be an additional resource made available to professors.
Also, from the professor's statements in the video and the earlier articles, it doesn't appear that this resource was ever meant to be public. This indicates that the resource was obtained through deceptive means from the publisher by the students, then disseminated.
This is a wonderful in-depth article and the PostgreSQL team deserves a lot of kudos for the work they put into this. I'm looking forward to using it. The thing I find interesting that MySQL offers is multi-master and circular replication. Cal Henderson noted that Flickr uses multiple masters replicating to slave servers in their setup during DjangoCon 2008 (a great presentation if you haven't seen it: http://www.youtube.com/watch?v=i6Fr65PFqfk).
Granted, multi-master setups aren't needed for most sites and PostgreSQL's new WAL replication and the fact that it makes sure that the WAL is written on the slave before the transaction is committed on the master means that data integrity should be top notch. It's exciting to see PostgreSQL's development progressing so well.
> it makes sure that the WAL is written on the slave before the transaction is committed on the master means that data integrity should be top notch.
That's not my reading of the documentation. The slaves may delay applying a transaction for some configurable period of time to allow conflicting read-only queries to finish. Slaves can also disconnect and reconnect at will without stopping the master.
The slaves may delay applying the transaction, but the data is already on the slave machine. As the article points out, the data gets written to the WAL on the slave before the transaction is committed on the master. The slave might not update the database from the WAL immediately, but the data is on the slave in case the master goes down.
Basically, it means that there is the potential for replication lag and so you have to make sure you query from the master for just-updated data. However, it also means that if the master goes down, any data committed on the master is already on the slave in the WAL, just possibly not applied yet. So, it doesn't isolate you from replication lag, but it is a nice thing when it comes to making sure that the data remains intact even if a complete failure of the master occurs.
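For context, a minimal 9.0 streaming-replication setup looks roughly like this (the hostname and replication user are placeholders; consult the docs for the full set of options):

```
# postgresql.conf on the master
wal_level = hot_standby       # log enough detail for a hot standby
max_wal_senders = 3           # allow slaves to connect and stream WAL

# recovery.conf on the slave
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
```

The slave then continuously applies the streamed WAL, which is where the replication-lag question above comes in.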
> As the article points out, the data gets written to the WAL on the slave before the transaction is committed on the master
Then the article is wrong, it doesn't. I can reboot or kill the slave and the master will still happily accept transactions. It's also pretty clear from tcpdump that the data only goes on commit, and it's one way from the master->slave.