The background image in the e-mail collection form can be opened in a new tab. It's full-size, so you can see the ToC without buying the book or signing up.
"On March 29 2011, Tesla filed a lawsuit to stop Top Gear’s continued rebroadcasts of an episode containing malicious falsehoods about the Tesla Roadster. Top Gear’s Executive Producer, Andy Wilman, has drafted a blog to present their side of the story. Like the episode itself, however, his proclamations do more to confound than enlighten.
Mr. Wilman admits that Top Gear wrote the script before filming the testing of the Roadsters. The script in question, concluding with the line "in the real world, it absolutely doesn’t work" was lying around on set while Top Gear was allegedly "testing" the Roadsters. It seems actual test results don’t matter when the verdict has already been given -- even if it means staging tests to meet those predetermined conclusions.
Now Mr. Wilman wants us to believe that when Top Gear concluded that the Roadster "doesn't work," it "had nothing to do with how the Tesla performed." Are we to take this seriously? According to Mr. Wilman, when Top Gear said the car "doesn't work," they "primarily" meant that it was too expensive. Surely they could have come to that conclusion without staging misleading scenes that made the car look like it didn’t work.
Mr. Wilman's other contentions are just as disingenuous. He states that they never said the Roadster "ran out of charge." If not, why were four men shown pushing it into the hangar?
Mr. Wilman states that "We never said that the Tesla was completely immobilized as a result of the motor overheating." If not, why is the Roadster depicted coming to a stop with the fabricated sound effect of a motor dying?
Mr. Wilman also objects to Tesla explaining our case, and the virtues of the Roadster. Top Gear has been re-broadcasting lies about the Roadster for years, yet are uncomfortable with Tesla helping journalists set the record straight about the Roadster’s revolutionary technology.
Mr. Wilman seems to want Top Gear to be judged neither by what it says, nor by what it does. Top Gear needs to provide its viewers, and Tesla, straightforward answers to these questions."
I hadn't heard that Tesla calculated that range. The direct quote from the show used the pronoun 'we', and I thought part of the suit was that Top Gear's 55-mile range statement defamed Tesla by making them look as if they were lying about the range.
The overarching point is that recharging stations aren't as ubiquitous as fuel pumps, and that it takes anywhere between 30 minutes and 12 hours to recharge the batteries. Was the point proven clumsily? Yes, but it's Top Gear; they're nothing if not clumsy, especially Jeremy Clarkson.
The point still stands. Also note that Nissan itself says that repeated fast charging of the Leaf's battery will cause it to be unable to hold a full charge. The fast-charge option is the 30-minute, 80% charge, which is the fastest you can charge a currently-in-production electric vehicle.
These are the facts and there's no agreeing or disagreeing with them. Top Gear presented them in a clumsy way, but they're still facts.
Actually, had you bothered to watch to the end of that episode, the point they made was about HOW a car is driven, not WHAT car you drive.
They drove the Prius as fast as it would go, because someone who buys a Prius will do that, and at that speed it would be less efficient than an M3, which is the "sports car" you mention.
And I agree with that point. If I flog my Civic, it'll return much worse mileage than if I drive carefully and efficiently. So, again, Top Gear's point is proven: people aren't willing to think critically and live in reality.
Exactly the point I took away from that test: it matters much more how you drive (calmly rather than aggressively) than what car you drive. This was even confirmed on MythBusters in their driving-calm-versus-driving-angry 'myth'.
To be honest, Mongo's execs have done pretty much the same thing. As I said in another comment, the Changelog episode on Mongo was very illuminating with regards to the marketing tactics of 10gen.
If they're doing the same thing, that's just as shitty. But I've been meaning to listen to that episode of the Changelog for awhile now, so thanks for the reminder!
The 'safe' feature isn't on by default yet. Also, the benchmarks 10gen publishes are based on the default setup, so basically, Mongo writes to RAM; therefore it's fast.
I love Mongo and am using it in a few apps, but their marketing does blow, I admit.
Also, Eliot Horowitz came out and bashed on Riak's eventual consistency promise by basically misleading devs into thinking that writing to MongoDB will always result in 'full consistency'. Listen to the ChangeLog episode on Mongo to hear that.
Riak and all the dynamo-style databases are really distributed key/value stores, and, you know, I've never used Riak in production, but I have no reason to doubt that it's a very good, highly scalable distributed key/value store.
The difference between something like Riak and Mongo is that Mongo tries to solve a more generic problem. A couple of key points: one is consistency. Mongo is fully consistent, and all dynamo implementations are eventually consistent and for a lot of developers and a lot of applications, eventual consistency just is not an option. So I think for the default data store for a web site, you need something that's fully consistent.
The other major difference is just data model and query-ability and being able to manipulate data. So for example with Mongo you can index on any fields you want, you can have compound indexes, you can sort, you know, all the same types of queries you do with a relational database work with Mongo. In addition, you can update individual fields, you can increment counters, you can do a lot of the same kinds of update operations you would do with a relational database. It maps much closer to a relational database than to a key/value store. Key/value stores are great if you've got billions of keys and you need to store them, they'll work very well, but if you need to replace a relational database with something that is pretty feature-comparable, they're not designed to do that.
Can you please explain this for a case where there are multiple replica sets, the database is sharded and nodes are across data centers? What's sacrificed? Something must be.
When we talk about consistency, we're talking about taking the database from one consistent state to another.
With replica sets, we're still only dealing with one master. We can get inconsistent reads from the replicas, but we're always writing to a single master, which allows that master to determine the integrity of a write.
With sharding, we're still only dealing with one canonical home for a specific key (defined by the shard key). (Besides latency, I'm not sure how data centers would affect this.)
What we're giving up in this case is availability. If an entire replica set goes down, we can't read or write any data for the key ranges contained on those machines. This is where Riak shines.
With Riak, any node can accept writes, and nodes contain copies of several other nodes' data. What that means is, as long as we have one node up, we can write to the database. Because of this, there is the possibility of nodes having different views of the data. This is handled in a number of ways (read repairs, vector clocks, etc.). Check out the Amazon Dynamo paper for more info; it's a great read.
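The conflict-detection side of this can be sketched with a toy vector clock in Python. This is an illustrative sketch of the general dynamo-style technique, not Riak's actual API or data structures:

```python
# Toy vector clock: each node bumps its own counter when it accepts
# a write. Clock `a` "descends from" `b` if `a` has seen everything
# `b` has; if neither descends from the other, the two writes were
# concurrent and must be reconciled (e.g. by read repair or the app).

def increment(clock, node):
    """Return a new clock with `node`'s counter incremented."""
    new = dict(clock)
    new[node] = new.get(node, 0) + 1
    return new

def descends(a, b):
    """True if clock `a` has seen every event that `b` has."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def concurrent(a, b):
    """Neither clock descends from the other: a conflict (siblings)."""
    return not descends(a, b) and not descends(b, a)

# Two replicas accept writes for the same key during a partition:
base = increment({}, "node1")    # {'node1': 1}
left = increment(base, "node1")  # {'node1': 2}
right = increment(base, "node2") # {'node1': 1, 'node2': 1}

assert descends(left, base) and descends(right, base)
assert concurrent(left, right)   # conflicting siblings: reconcile
```

Both `left` and `right` causally extend `base`, but neither extends the other, which is exactly the situation a single-master system never has to handle.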
I'm sure I'm missing some stuff, but I think that covers the gist of it.
EDIT: One thing that I want to make clear, I don't think that one architecture is better than the other. They each have their own pros and cons, and are really suited to solve different problems.
None of this is guaranteed by default. By default, writes are flushed every 60 seconds. By default, there's no journaling. How can one claim full consistency if the former two points are true?
Don't get me wrong, I love Mongo. I'm building a web app backed by it. But the marketing talk is grating, which is what this post nails.
I think those two issues are orthogonal to consistency. In ACID, consistency and durability are two different letters and CAP doesn't even mention durability. Are you referring to another definition of consistency?
How is flushing a write every 60 seconds orthogonal to consistency? If there's a server crash between the write to RAM and the subsequent flush, the data is lost, is it not? How do you guarantee the data is there in that case?
That would mean the data set was not durable; it doesn't speak to consistency at all. DB consistency is about transaction ordering: transaction 1 always comes before transaction 2, but 2 may exist or not as it pleases. Transaction 1 must be present if 2 is present.
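The distinction can be shown with a toy write-ahead log (a hypothetical sketch, not MongoDB's actual flushing code): a crash before a flush loses a suffix of the writes, which is a durability failure, but everything that survives is still an ordered prefix, so the ordering property described above holds.

```python
# Toy log: writes land in an in-memory buffer and are flushed to
# "disk" periodically. A crash loses only the unflushed suffix.

class ToyLog:
    def __init__(self):
        self.disk = []    # durable: survives a crash
        self.buffer = []  # in RAM: lost on crash

    def write(self, txn):
        self.buffer.append(txn)

    def flush(self):
        self.disk.extend(self.buffer)
        self.buffer = []

    def crash(self):
        self.buffer = []  # unflushed writes are gone

log = ToyLog()
log.write("txn1")
log.write("txn2")
log.flush()
log.write("txn3")
log.crash()

# txn3 is lost (not durable), but the survivors are an ordered
# prefix: txn2 never appears without txn1. Ordering holds.
assert log.disk == ["txn1", "txn2"]
```

The failure mode that would break consistency in this sense is txn2 surviving while txn1 vanishes, and losing a suffix can never produce that.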
If a slave can continue serving reads whilst partitioned from a master that continues to accept writes then you cannot guarantee consistency. If a slave cannot serve reads when partitioned then you aren't available. If a master cannot accept writes when partitioned then you aren't available. See this excellent post from Coda Hale on why it is meaningless to claim a system is partition tolerant http://codahale.com/you-cant-sacrifice-partition-tolerance/.
I interpreted "what is sacrificed?" as asking which letter of CAP MongoDB was giving up. Coda's article actually explains exactly the tradeoffs MongoDB makes for CP:
-------------------
Choosing Consistency Over Availability
If a system chooses to provide Consistency over Availability in the presence of partitions (again, read: failures), it will preserve the guarantees of its atomic reads and writes by refusing to respond to some requests. It may decide to shut down entirely (like the clients of a single-node data store), refuse writes (like Two-Phase Commit), or only respond to reads and writes for pieces of data whose "master" node is inside the partition component (like Membase).
This is perfectly reasonable. There are plenty of things (atomic counters, for one) which are made much easier (or even possible) by strongly consistent systems. They are a perfectly valid type of tool for satisfying a particular set of business requirements.
In a replica set configuration, all reads and writes are routed to the master by default. In this scenario, consistency is guaranteed. (You can optionally mark reads as "slaveOk", but then you admit inconsistency.)
This does sacrifice availability (in the CAP sense), but I haven't heard anyone claim otherwise.
"In a replica set configuration, all reads and writes are routed to the master by default. In this scenario, consistency is guaranteed."
One would hope that reading and writing a single node database was consistent. This is table stakes for something calling itself a persistent store. Claiming partition tolerance in the above is the same as claiming availability. The former claim has been made. Rest left as exercise for the reader.
If a slave is partitioned from its master, it won't be able to serve requests. (Unless the request is a read query marked as "slaveOk", in which case you admit inconsistency.) I highly doubt anyone would claim otherwise.
The implication is that the people for whom eventual consistency is not an option will never reach a data-set size or availability requirement that forces them to use replication and experience the lag (and eventual consistency) involved.
Among the major features touted are auto-sharding and replica sets. I don't know if the implication is that it's only for web apps/websites that won't need those.
In the sharded case, at any given moment each object will still live on exactly one replica set, which will have at most one master. You can do operations (such as findAndModify http://bit.ly/ilomQo) that require a "current" version of an object because all writes are always sent to the master for that object. You can also choose to accept a weaker form of consistency for some reads by directing them to slaves for performance. This decision can be made per-operation from most languages.
As for trade-offs: Relative to a relational db, there is no way to guarantee a consistent view of multiple objects because they could live on different servers which disagree about when "now" is. Relative to an eventually consistent system, you are unable to do writes if you can't contact the master or a majority of nodes are down.
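The routing rule described above (every write for a given shard key goes to exactly one replica set's master, and fails if that master is unreachable) can be sketched in Python. The class names and the character-sum partitioning are illustrative only, not how mongos actually routes:

```python
# Toy router: each shard-key partition owns one replica set, and all
# writes for a key go to that set's master. If the master is down,
# writes to that partition fail rather than being accepted elsewhere:
# consistency is chosen over availability.

class ReplicaSet:
    def __init__(self, name):
        self.name = name
        self.master_up = True
        self.data = {}

    def write(self, key, value):
        if not self.master_up:
            raise RuntimeError(self.name + ": master unavailable")
        self.data[key] = value

class ShardRouter:
    def __init__(self, replica_sets):
        self.sets = replica_sets

    def route(self, key):
        # Deterministic toy partitioning by character sum; real
        # systems use range-based or hashed shard keys.
        return self.sets[sum(map(ord, key)) % len(self.sets)]

    def write(self, key, value):
        self.route(key).write(key, value)

router = ShardRouter([ReplicaSet("rs0"), ReplicaSet("rs1")])
router.write("user:1", {"name": "alice"})

# Take down one master: keys routed there become unwritable, while
# keys owned by the other set still succeed. No node ever silently
# accepts a write it doesn't own.
router.sets[0].master_up = False
failed = 0
for k in ("a", "b", "c", "d"):
    try:
        router.write(k, 1)
    except RuntimeError:
        failed += 1
```

This is the CP trade described earlier: during the outage, part of the keyspace refuses writes instead of diverging.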
If quality were your only concern, you'd just charge a one-time $5 fee. So I'd say greed is at least somewhat involved.
And yes, giving away coupon codes is all well and good... but then you run into the quality problem that you use as an excuse to charge from the start.
Don't get me wrong, there is nothing wrong with greed...provided it doesn't cause you to kill your business before it gets the chance to get off the ground.
Businesses exist to earn money, and the best way to convince others of the value of your service is to charge for it. You can argue that it isn't good business sense, but calling it greed is a judgment call that you have no evidence for.
I think a good strategy is to get your minimum viable product out as soon as possible and call it a beta. Don't charge for the beta and provide plenty of warning to existing clients by signalling early on your intention to charge for some/all features when you are out of beta. After a private beta period, this is how I'm planning to handle Mighty CV, a resumé building app with hacker leanings that I've been working on. I'm looking for private beta users to kick the tyres a bit, so if you feel inclined then you can sign up for the private beta at http://www.mightycv.com.
I always remember being impressed with the way Heroku did things in the early days. After beta feedback it must have become clear to them that it made sense for them to rewrite from the ground up. This left them with a beta platform which they gracefully continued to support, renamed herokugarden, whilst also rolling out the paid for service. They then provided plenty of info on how herokugarden users could migrate to the new Heroku platform for free too. I'm sure they learnt a lot early on about what direction they needed to take the Heroku platform. Anyone remember the web based code editor? Without the early feedback from beta users perhaps they would have pushed more in that direction instead of changing course towards the Heroku we all know and love today.
It's not greedy; it's just not a good idea to try to capitalize on a service that isn't running on all cylinders yet. That said, it's definitely a chicken-and-egg problem. I would worry about getting users before starting to charge for the service. Once a larger number of members has been obtained, thinking of ways to monetize it should be fairly straightforward.
The typical employer expects to pay to post a job listing. Yes, more traffic is better, but I bet this site will start getting major traffic within a month. Being a specific niche, I also predict it will get some major Google SEO juice before too long. So that egg had better start running or the chicken will catch up!
I don't think the chicken/egg problem applies here. It applies to e.g. dating sites because there are other options (e.g. other dating sites, bars, etc.). In this case, a ton of people want to work remotely and there is no central place to find that. Now I know of one place, so I'll definitely be checking it often.
I doubt running this site costs so many resources that they NEED to be charging right now.
Another option would be to leave the charge-now link and go out to Craigslist, Dice, etc. and say: hey, want to have your job listed on our site for free?
They might seem a lot more expensive, but when you're looking to hire a new employee (an expense on the order of tens or hundreds of thousands of dollars), the difference between $75 and $500 to advertise the position isn't really that significant.
In addition to what csomar said: for a small business or someone looking to hire an independent contractor, I think many advertisers would consider $425 a huge difference. In fact, I think a lot of potential advertisers would consider $75 significant in the first place.
But for me, charging advertisers a significant fee is valuable to establish that they're serious about hiring. That's the rationale behind the fee structure on my site http://WheresTheRemote.com/ . To me, an advertiser paying a fee of something like 1x or 2x the hourly rate they're advertising (for an independent contractor or employee, respectively) is a token of their sincerity about wanting to hire and pay the rate they advertise, which the site requires them to include in the ad. Conversely, unwillingness to pay such a fee makes me concerned that they would just waste the time of the job seekers visiting my site and I don't want to publish the ad, since quality is an important goal for my project. Of course, having a decent amount of traffic would help establish the value proposition for advertisers to pay such a fee.
You are here assuming that this source will bring enough traffic to be your only source? Otherwise, you'll need to advertise in many sites and at that time the price makes a difference.
This is certainly the reason I went to see it. However, not only was I underwhelmed by the regurgitation of the Pocahontas story, I was extremely underwhelmed by the 3D. The only effect that made me think the film was "realistic" was the flies that kept flying around when they were in the forest.
Avatar did exactly the opposite of what it intended to do: it turned me off of 3D movies, not on to them. Sure, it made the money it did, but at least for me, it didn't do anything to advance the technology or promote it.
Yeah, that was the exact same effect Avatar had on me. I'm really glad I saw it in 3D, and now I feel no desire to see another movie in 3D for at least a couple of years (when hopefully some neat new tech will have come along that I'll feel compelled to check out).
Go to a real IMAX theater and watch a real IMAX movie. They tend to be science documentaries, like Hubble 3D. Compared to that, Avatar just seemed fuzzy and out of focus to me.
On an unrelated note, that whiteboard graph disturbed me a bit. Does every YC startup hope to get bought out? Why would that even be a goal or something to aspire to?
It seems to be part of the definition of a startup that the end goal is either acquisition or IPO. PG has said (in an interview, might have been on Mixergy) that YC can't make a profit on a startup unless it has an exit.