I own a 2012 Nissan Leaf which has had major battery degradation (first battery went under 70% in only 33k miles, second battery currently around 85% at 45k miles = 78k miles total on both). But on this site, the 2012 Nissan Leaf has less than 2 years of data and shows zero degradation during that time. So it seems like the data is questionable.
Yes, the Leaf uses passive battery cooling as opposed to Tesla and others with active cooling. Their first-gen battery chemistry was also worse in hot climates. The newer battery is a different chemistry that has held up better.
Alternatively, it could be using a state-of-charge tracking mechanism that conceals battery degradation. Say the battery has 100 kWh, but the remaining-energy gauge reads zero after 80 kWh are expended. You could be degraded to 90 kWh but you wouldn't perceive any reduced usable energy until the pack degrades below 80 kWh.
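To make the masking concrete, here's a tiny sketch (the 100 kWh / 80 kWh figures are just the hypothetical numbers above, not real Leaf specs):

    -- Hypothetical pack: 100 kWh installed, but only 80 kWh exposed as "usable".
    advertisedUsable :: Double
    advertisedUsable = 80

    -- What the gauge reports, given the pack's true remaining capacity in kWh.
    displayedCapacity :: Double -> Double
    displayedCapacity trueCapacity = min trueCapacity advertisedUsable

    -- displayedCapacity 100 == 80.0  -- new pack
    -- displayedCapacity 90  == 80.0  -- 10 kWh already lost, driver sees nothing
    -- displayedCapacity 75  == 75.0  -- degradation finally becomes visible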
Yes, and my point is that the mechanisms to prevent battery degradation can often end up masking the degradation. This isn't nefarious; it's a byproduct of good state of charge management.
I have a 2015 Leaf with 45k miles on it. Battery state of health is 90% according to Leafspy. I’m in the SF Bay Area. Maybe it’s because of the more temperate climate here, but we’ve just had zero battery degradation problems.
My gut feeling is that temperature and constant deep discharging probably explain most of the issues with Leafs. Like a lot of things, there is a curve with a knee, and it's easy to use a Leaf in a way that puts you on the wrong side of it.
I don't really know how the Leaf's batteries are connected, but one could also expect that a smaller battery pack has less redundancy. If, due to bad luck, a couple of cells go bad, you lose a lot of capacity, and that's more likely to happen the less redundancy you have.
I would call the Leaf's cooling system "body-contact passive cooling". Calling it "air cooling" is misleading. Proper air cooling with A/C should be significantly better for battery health.
I will mention that the Leaf suffers from another problem that sets it apart from more recent EVs like the Tesla.
I don't really think it's so much temperature management or some defect.
I think it's the size of the battery - it's small. You can figure out the lifetime with simple math. A Tesla with 250 miles of range and 1000 cycles would have gone 250,000 miles. A Leaf with 75 miles of range * 1000 cycles will have gone 75k miles.
And Tesla recommends keeping the battery between 20 and 80%, and I believe the slider says "daily" is 60% and "trip" is 90% (for occasional use).
A 2012 Leaf will default to 100%. There is a way to change this to a lower value, but it's well hidden in the system settings and I believe you have to agree to telemetry to set it.
And if you DO set it on the Leaf, you will limit your range to about 45 miles (of ideal driving).
Anyway, the idea is that newer cars have bigger batteries, a much higher lifetime mileage, and no need to cycle the battery charge so high, so low, or as frequently.
(a tesla driven 200 miles a week might cycle the battery once while a leaf would cycle every day)
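To put that math in one place, a quick sketch with the same round numbers (rough assumptions, not official specs):

    -- Lifetime miles if the pack is good for a fixed number of full cycles.
    lifetimeMiles :: Double -> Double -> Double
    lifetimeMiles rangeMiles cycles = rangeMiles * cycles

    -- Full charge cycles consumed per week for a given weekly mileage.
    cyclesPerWeek :: Double -> Double -> Double
    cyclesPerWeek weeklyMiles rangeMiles = weeklyMiles / rangeMiles

    -- lifetimeMiles 250 1000 == 250000.0  -- Tesla-sized pack
    -- lifetimeMiles  75 1000 == 75000.0   -- Leaf-sized pack
    -- cyclesPerWeek 200 250  == 0.8       -- well under one full cycle a week
    -- cyclesPerWeek 200  75  ~= 2.7       -- several full cycles a week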
>I own a 2012 Nissan Leaf which has had major battery degradation (first battery went under 70% in only 33k miles
A battery isn't like a tank of gas; it's never truly off. It's full of reactive chemicals and powerful solvents, and it degrades over time even if you don't cycle it. How long did it take your battery to drop?
The 2011/2012 Leafs had major problems with battery degradation in hot climates, so much so that Nissan was forced to offer a free battery replacement. So I got it replaced for free. The newer battery is lasting better, but still not great. I think Nissan screwed up by sticking to passive battery cooling.
For reference, I live in southern Florida, so we have a hotter-than-average climate.
I had a friend with a 2012 Leaf that experienced similar degradation. It was a problem with the early model year and Nissan's choice to use passively cooled batteries.
Every discussion or article I have read on dependent types assumes that the reader is a mathematician. I usually make it only a few paragraphs in before I am completely lost.
Right now the only thing that seems clear is that dependent typing adds significant mental burden on the programmer to more thoroughly specify types, and to do so absolutely correctly. In exchange for that burden, there must be practical (not theoretical) benefits but they do not come through clearly. All of the examples I've seen are about "index of a list is guaranteed in bounds". That is such a minor and infrequent bug in practice that it does not justify the additional complexity. There must be more, that I'm just not seeing.
Is there a "Dependent types for non-mathematicians" article out there somewhere, where I can learn about patterns and practical applications?
> Right now the only thing that seems clear is that dependent typing adds significant mental burden on the programmer to more thoroughly specify types, and to do so absolutely correctly.
I'd say that's backwards: rather a dependently typed language relaxes a huge restriction that most programming languages have, that only certain things can be used as types. Any program that's valid in language X is also valid in a dependently typed version of language X (e.g. most Haskell functions translate directly into Idris as long as they don't rely on laziness).
Dependent types make it easier to encode properties that you care about into the type system, i.e. rather than the programmer having to get them right, the compiler can check them for you. Far from burdening the programmer, it lightens your mental load.
> All of the examples I've seen are about "index of a list is guaranteed in bounds". That is such a minor and infrequent bug in practice that it does not justify the additional complexity.
Most code is in terms of domain-specific things, and so the types you use are also domain-specific. In my experience every code bug (as distinct from "behaving as specified but not as intended" bugs) boils down to "we thought x, but actually y" and can be avoided by making more precise use of a type system. If you have bugs that hit production, you can probably avoid them with better types. If you avoid production bugs by using tests, you can probably replace most of those tests with types and they'll become more concise and easier to maintain. But of course the specifics of what types to use will be specific to your domain; examples like list indices are common because they're one of those rare types that virtually everyone uses.
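For what it's worth, here is roughly what the much-cited list-index/length example looks like when the property lives in the type. This is a Haskell approximation using type-level naturals rather than a genuinely dependently typed language like Idris (which makes it more direct), just to show the flavor:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- A list whose length is part of its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Only callable on a provably non-empty vector, so "head of an empty
    -- list" is rejected at compile time instead of failing at runtime.
    safeHead :: Vec ('S n) a -> a
    safeHead (VCons x _) = x

The same trick works with your own domain-specific invariants; the length example is just the one everybody shares.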
> Right now the only thing that seems clear is that dependent typing adds significant mental burden on the programmer to more thoroughly specify types, and to do so absolutely correctly
It's basically the opposite of this.
If a language has dependent types, you can continue to use it in the same way as a language that does not have dependent types.
BUT, there's a lot of programming patterns that previously couldn't be implemented in a type-safe way, that now can be. This also gives a ton of opportunities for libraries to implement abstractions that weren't expressible before.
Dependent types are not that complicated, actually, and they don't add any burden to programs that don't use them.
I love this tool, have been using it obsessively on football sunday :)
Question (or suggestion): would it be possible to choose the desired outcome? By that I mean, my team (the Lions) is already in the playoffs, so all changes I make show 100s in all columns. But what I'm really trying to narrow down is whether they will win the division, or whether they will have home-field advantage throughout. Perhaps once a team has met a particular goal (e.g. a 100% chance of making the playoffs), the remaining games could be ranked by their effect on the percentage chance of reaching the next-higher goal?
I'd like something running on the CLR. Not because I think the CLR is very special; just because I really like Haskell but I have to work within a large ecosystem of existing .NET code.
I've been playing with it some. It's nice, but I still prefer Haskell. I want to be able to separate pure from impure code, but F# doesn't seem to enforce that. (Unless I'm missing something?)
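For comparison, this is the kind of enforcement I mean in Haskell: effects show up in the type, and the compiler won't let pure code quietly call impure code (a minimal sketch):

    -- Pure: the type promises no side effects.
    double :: Int -> Int
    double x = x * 2

    -- Impure: the IO in the type is mandatory because of the print.
    doubleNoisily :: Int -> IO Int
    doubleNoisily x = do
      putStrLn ("doubling " ++ show x)
      return (x * 2)

    -- double (doubleNoisily 3)  -- type error: IO Int is not Int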
It doesn't. In fact, F# feels like it started off as a fairly pure functional language but some Microsoft PM said "it has to support every API in .NET" and so a lot of OOP stuff was bolted on the side. It feels like 2 different worlds in one language. You can easily get by never using the impure stuff but it's still there, taking up space.
F#'s design was almost completely handled by Don Syme at MS Research Cambridge. It is highly unlikely that some PM jumped in and dictated what it must do.
The OOP part is rather coherent; I'm not sure what's bolted on - any examples? In fact, things like object expressions do parts of OOP even better than Java/C#. Overall the syntax for OOP is rather concise and neat.
Having proper interop with .NET, and hence with the OOP, is ... pretty key to making a successful .NET language. Otherwise you lose one of the major selling points: being able to use .NET libraries.
> The OOP part is rather coherent; I'm not sure what's bolted on - any examples? In fact, things like object expressions do parts of OOP even better than Java/C#. Overall the syntax for OOP is rather concise and neat.
I'm saying I'm not sure why the OOP exists at all. Someone else pointed out that OCaml has those elements as well, and maybe that's the reason. Just in my own use, I never felt the inclination to use the OOP at all, and it felt like a part of the language that I didn't understand, didn't want to understand, but that was there nonetheless.
F# has its origins in OCaml which has the same set of properties (ability to write impure code and has an object system, albeit one that not many people use). F# replaced that object system with the .NET one.
Frankly, I've had to maintain a huge C++-only codebase, and that's why I hope to never write another app in C++.
Yours is an argument for a high-level, typesafe, compiled-to-native and compiled-to-javascript language. But IMO the argument falls down when applied to C++, because it is not high-level, it is only marginally typesafe, and it doesn't compile to javascript well at all (the tradeoff is either a massive performance hit or missing basic functionality like 64-bit ints).
It is very possible that no language will fit this role exactly for some time to come. But, to me at least, it is very clear that C++ is NOT the language for the job.
Yes, I see your point to a degree. In a way we're working in a sort of "high-level C++" most of the time, at some performance cost: we have strict coding conventions in place which forbid low-level C/C++ stuff in high-level code (no raw pointers, no C-style arrays, no pointer arithmetic, no C library functions, etc.), plus static code analysis and tons of runtime asserts. I can't remember the last time we had a buffer overflow or a pointer gone wild. The performance hit of C++ compiled to JS is surprisingly small (about 1.5x native performance in Firefox, a bit slower in Chrome, but the gap is getting smaller), which is in the same ballpark as strongly typed bytecode languages like C# or Java.
IMO the root problem is that A) we have too many identities, B) those identities are rarely protected properly (sites don't hash/salt, don't have password expiration policies, don't use 2-factor auth), and C) managing those identities over time is nearly impossible.
I use LastPass, and it's great. But I didn't always use it; before I started, I used a couple of passwords everywhere. Recently some site which I haven't even used in years was compromised, and as a result, one of my "frequently used passwords" was potentially compromised. I had to spend hours going to dozens of websites and changing my password. Every site has a different way to change your password, and different policies for acceptable passwords, and most don't even make it easy/obvious.
I think something like Mozilla Persona is a good start, but not quite complete. Give me one, central place to manage my identity. The ability to control which sites have access to my identity. The ability to allow, or not allow, different sites to correlate my identity with each other. The ability to have my identity independent of my email address. Good two-factor auth for establishing identity, and good password management policies. Single-sign-on, even across independent sites, with just a click.
So the problem is that a proposal like this encourages people to do the wrong thing; i.e. ask me for a username and password - without two-factor auth, without considering whether I will be able to manage yet-another-password, without considering whether they should even be in the business of authentication themselves.
The assumption is that the login button in the browser will be accompanied by features such as random password generation and automatic sync in the cloud (LastPass does this to some extent), so that the user doesn't need to manage yet another password. The proposal is to make this happen without waiting for websites all over the world to standardize on a single third-party identity like Persona (or heaven forbid, Facebook Connect).
I don't think there's anything in my proposal that makes 2FA impossible. That can be written into the spec. Enter your tokens into a little textbox that your browser pops up when you click "Login" on a website that requires 2FA.
Although many people seem excited about single sign-on systems like Persona, I respectfully disagree, for reasons I wrote about in a different post [1]. You ask whether individual websites should be in the business of authentication, but I'd rather ask why anybody should be in the business of authenticating anybody else to third parties. I'm not opposed to keeping all my credentials in a single location, but I want that location to be inside my own devices. I'm not opposed to sync, either, but I want sync to involve full client-side encryption. I have a great deal of trust in Mozilla, but precisely because I love them, I don't want them ever to put themselves in a position where a three-letter agency can ask them to hand over any information about me, even if it's just a list of email addresses that I use with Persona.
What exactly is the point here? If you want to write C#, just use ASP.NET and MVC 4. You will certainly save yourself a lot of headaches, and you can still do everything async, etc. From what I can tell, the purpose of Node.js is to reduce the number of technologies that a web dev has to use and understand. You already need to use JavaScript on the client side, so using the same thing on the server side allows code reuse, knowledge sharing, etc.
It seems like what the author really wants is ASP.NET on the server, and Script# on the client side. Then the whole stack is C#. Or perhaps he would find TypeScript an acceptable middle-ground on the client. But I don't understand why you would choose Node.js on the server if you hate JavaScript, does not compute.
MVC is actually pretty damn clunky. It has a lot of built-ins which are just frustrating (like IPrincipal, ugh). Also there are loads and loads of weird quirks that pop up as soon as you start trying to do anything like returning JSON.
I think one of the reasons node.js is so great is that it just cuts out almost everything and gives you direct control over what is returned. MVC still mucks around with everything trying to be 'helpful' as it's really built on ASP.Net in the background.
To many of us, javascript is still one of the worst mainstream languages around today.
Still, why not just make node.cs instead one wonders? Perhaps the existing ecosystem.
I personally find those "helpful" things to be truly, well, helpful. For example the input validation, anti-forgery token validation, simple cache control, etc.
Again, not saying you can't do these things in node. The two technologies can easily accomplish the same goal. But if your primary concern is avoiding JS, why would you choose a technology that is built entirely upon it?
Node.cs might make more sense, I agree. I have a feeling the existing ecosystem might bring its own problems with the author's approach, because your C# code may have trouble integrating with those existing libraries, if the underlying generated code does not behave as the Javascript library expects.
There are a load of little things that are gotchas, but one example that pops into my head is that it wasn't compatible with jQuery's defaults. If you didn't ask specifically for a content type of application/json, which jQuery didn't by default, it would throw a hissy fit. It meant mucking around with the ajax settings object, meaning you couldn't use certain jQuery shortcut methods, regardless of what you 'told' MVC to do.
So there's a bunch of things it's doing that you're not even aware of and haven't asked it to do.
I think that was MVC 3? I've been using it since the first version; it's much better than it was at the start (the original JSON support was awful), but you still get a WTF moment every now and then.
For example I still really have no idea the 'right' way to return 404 or 500 error pages. I swear they change their mind every release.
It's not the return type, it was the request type. It didn't like it if you requested it with text/plain instead of application/json, which was the default for jQuery's JSON methods.
Yeah, but this is my whole point. Finally, in version 4, they stop mucking around with the requests before handing them to you. Someone, somewhere in the depths of MS, clearly believes they understand HTTP better than you do, and keeps guessing what you really 'meant'.
Even though they quite clearly don't actually get HTTP. For example, take the fact that it's nigh impossible to get the actual request body in ASP.Net. Whose bright idea was that?
In reality every single interface, every single framework they've produced so far has shown a woeful lack of understanding about the web in general and pretty much how it's used outside their world. And I say this as someone who's primarily programmed in VBScript, VB6, C#, Silverlight, ASP.Net and ASP.Net MVC.
I keep almost jumping ship, and then they just kinda fix it and I stick around hoping they're not going to make the same mistakes. But they do. Jeez, MVC's ajax stuff is unsurprisingly fucking awful.
But that's the problem with MVC and anything MS led, they don't get the web, they don't get javascript, they keep making incredibly silly decisions.
A good example: every time I hear 'unobtrusive' js, I just want to scream. They're the cause of this made-up problem. No one else was doing js like that in 2010; no one else needed unobtrusive javascript. Just MS. There's no such thing as unobtrusive javascript; there's just not writing the kind of idiotic magic code that MS constantly does when it comes to javascript.
And don't even get me started on their 'web services' or WCF. Both deserve to die in a fire.
TL;DR: I love C#, and think it's the best language available today by far. I hate asp.net though.
The biggest thing I can think of is how it defaults to throwing an exception when attempting to return JSON for a GET request. You could disable this easily enough, but I never understood the rationale behind the decision, and it was a momentary frustration every single time. Hopefully they've changed that in MVC 4.
I'm genuinely curious which feature(s) of node you like better than ASP.NET. I have used both and find them quite similar as far as "raw" capability. ASP.NET however has all of the things you were asking for: great Intellisense, great tooling (for instance integration with Entity Framework), great libraries available (such as SignalR), code in C# or any .NET language (including F#), rich async support...
I like node too, but to me, the reason to choose it is either if you want to remain platform-neutral, or if you really like Javascript.
I would imagine he's referring to the non-blocking IO and evented model. ASP.Net and friends use threads for everything, which allows for heavier processing, but also is more restrictive in terms of concurrency.
I had the wavefront surgery 3 years ago and it was one of the best things I've ever done for myself. My vision was not terrible before (-1/-1.25 plus astigmatism), but afterward I have 20/15 in one eye and 20/20 in the other. The biggest aspect of the correction was that prior to the surgery I had very bad night vision - halos and glare - that made it very difficult for me to judge the distance or speed of oncoming traffic. Afterward, my night vision is very good, and those problems are completely corrected!
The surgery was, I have to admit, a very frightening experience though it did not hurt. The recovery was very quick and easy, and I am happy to report that after one month I never experienced dry eyes again and have had literally no negative effects from the surgery.
I used to wear contacts until my doctor said my eyes were growing blood vessels they shouldn't have, to compensate for the contacts blocking the eye from being exposed to the air, and that those might eventually lead to serious problems. Then I talked to a family member who had Lasik (non-wavefront) and they loved the result, though it caused night halos for them. So I did the research and decided on the wavefront procedure, and I am really glad I did. I would definitely recommend it, but spring for the wavefront procedure.
Another thing to mention is that my doctor told me that Lasik does not change anything about age-related vision problems, because those are more due to the inability of the eye to change focus than a malformed cornea. He says you are just as likely to need reading glasses at age 50 after Lasik as you are without it. The only thing they can do for older patients is an alternate procedure where they change the focus of one eye to nearsighted, and the other to farsighted. Apparently your brain soon compensates for that and biases to one or the other eye so that you see both near and far things in focus.