Gasoline is 100 times more energy dense than a battery. The Tesla Roadster adds 800 lbs to the Lotus Elise on which it is built. Batteries have a long way to go.
AFAIK the factor is more like 50x now (Tesla's Panasonic packs vs. gasoline) [1]. The thing is, for a fair comparison of energy density you should count the weight of the whole drivetrain, which lowers gasoline's advantage quite a bit (due to a heavy motor and transmission) [2]. You also have to consider the significantly lower energy conversion efficiency of ICEs (3-4x). All in all I'd expect the real difference is more like 2x; otherwise it would be impossible to build EVs with more than two thirds the range of ICE cars of similar size, weight and power (Tesla S vs. BMW M5).
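To make the comparison concrete, here's a back-of-the-envelope calculation. All figures are rough assumptions (ballpark public numbers, not measurements), and it only covers the raw density and conversion-efficiency steps; the drivetrain-weight adjustment would narrow the gap further:

```python
# Back-of-the-envelope energy density comparison.
# Every figure below is an assumed ballpark value, not a measured one.
gasoline_kwh_per_kg = 12.9   # chemical energy content of gasoline, roughly
pack_kwh_per_kg = 0.25       # roughly cell-level density of 18650-era packs

raw_ratio = gasoline_kwh_per_kg / pack_kwh_per_kg
print(f"raw density advantage: {raw_ratio:.0f}x")          # ~52x

ice_efficiency = 0.25        # assumed tank-to-wheel efficiency of an ICE
ev_efficiency = 0.90         # assumed battery-to-wheel efficiency of an EV

effective_ratio = raw_ratio * ice_efficiency / ev_efficiency
print(f"after conversion efficiency: {effective_ratio:.0f}x")  # ~14x
# Charging the engine block, transmission, exhaust, and fuel system
# against the gasoline side would shrink this ratio further still.
```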
Edit: IMO the main disadvantage of EVs today is not energy density, where they're almost on par with ICE. It's the rate at which energy can be refilled. Driving a Tesla long distances is still a hassle. That's why I see EVs mainly as daily commuters in the near future, while for weekend trips people may just rent an ICE car.
Only because they don't need it. Most people are comfortable with a ~300 mile range, knowing they can refill in 5 minutes. So most cars are built with 10-to-15 gallon tanks.
Build a car with a 30 gallon tank and you could easily be in the neighborhood of 1000 miles range.
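The arithmetic behind that claim, assuming ordinary highway fuel economy (the 33 mpg figure is an assumption):

```python
# Range of a hypothetical 30 gallon tank at an assumed highway economy.
tank_gallons = 30
mpg = 33   # assumed highway fuel economy

range_miles = tank_gallons * mpg
print(range_miles)   # 990 -- right around the 1000 mile neighborhood
```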
But the beauty of adopting electric cars now is that when a better battery technology is developed it can be replaced in cars on an individual basis, whereas a reason we don't drive hydrogen cars today is because fuelling stations would need to stock that in addition to traditional fuels. Supposing nanowire batteries come out for cars: they can still use the same EV chargers.
A lot. Most of the upper Midwest lacks cellular coverage. You need to drive to the top of a hill just to hit a GSM signal at one bar. The old analog signals were better, but they're gone now.
My grandparents do not get cellular coverage where they live in very remote Montana. Apparently either Verizon or AT&T (I can't remember which) has been promising them cellular/cable service for over 5 years now, and has never come through with it.
Honestly, I think the only reason they even pay for AOL is that they use it to keep in touch with relatives via email.
And it would, except that Charter Communications owns the rights to the cellular spectrum there. So once such a system were in place, Charter would no doubt come in, take it away from you, and then operate it at a profit if they could.
Left to their own devices, people would never upgrade. This is a security risk for MS. (Win 8 support ends January 2018). Chrome was championed for auto updating, and now all browsers do the same. Web apps can auto update whenever they want. Mobile apps auto update. Why shouldn't operating systems?
There's a difference between security and stability updates and major version changes.
I have no compelling reason to upgrade my Windows 7 machine to Windows 10; 7 performs all the tasks I need it to. I do not trust the privacy changes in Windows 10, which is more reason for me not to upgrade.
I've disabled the Windows 10 upgrade path, but otherwise keep it up-to-date on all patches.
What do I do when Windows 7 support ends? I'll cross that bridge when I get to it... but that bridge won't be to Windows 10.
Chrome has made some pretty major, breaking changes due to auto-updates. I opened Chrome one day to find that my VPN to work would no longer function due to sweeping changes in extension permissions. I would agree with the parent that Chrome's behavior is very similar to what MS is doing now, and probably for similar reasons, except that MS is actually giving warning before they do it. That doesn't make it any more pleasant, but it still will result in a more secure infrastructure.
I agree with your statements about Windows 10. In my case, I don't use my Windows laptop day to day anymore; it sits connected to my TV via HDMI cable only to stream DRM-protected content such as HBO Go, Netflix, etc. It also told me that Windows 10 is not optional, that the upgrade will happen next week, and that there is nothing I can do about it.
At least once a week it takes about 30 minutes to apply updates, during which my computer restarts several times.
Fortunately that use case is quite well covered by alternative solutions, like Roku, Chromecast, Fire, or a homebrew HTPC running something like Kodi or Plex. The Raspberry Pi is a popular host device for that.
"DRM related content such ad HBO Go, Netflix, etc." None of those will be supported on a device for as long as they will be supported on a PC. Not that I'm a PC fan, but the general purpose operating system idea really shines here.
I disagree. Only a homebrew HTPC is at risk of losing those things. Any of the commercial streaming boxes/sticks from a reasonably reputable company should have support for DRM-protected content for the indefinite future. After several years a specific version of the hardware may stop receiving updates, but that's not a big deal when the original device cost $35 (assuming that you can pick up a similarly-priced replacement in the future).
To further your Chrome analogy, this particular browser update breaks some peoples' favorite sites, rearranges the UI they've grown comfortable with (but inconsistently), displays ads next to the navigation bar, includes a built-in unique advertising id, pushes sponsored extensions on users, deliberately makes it as hard as possible to change the default search engine (and periodically reverts to the default anyway), and is the last update they can choose to decline, after which all future updates will be forced on them, closing all tabs and restarting the browser whenever it feels like it.
Oh, and also, their choice to decline this particular update is being actively subverted using dark patterns to try to trick them into upgrading.
One reason I can think of, and the reason I'm staying on Win7 as long as possible - Windows 10 installs updates and reboots (killing all open windows, unsaved documents, browser tabs, etc) without asking for permission to do so. This outrages me.
I took screen shots with my phone while the upgrade ran so I can participate in the class action lawsuit that will never happen.
They're a bit blurry because I was using the outrage filter.
I suppose the settlement would be a free one-month trial of MS cloud services. And at the end of the month they'd auto upgrade you to the highest cost tier.
Mobile apps auto-update by default; there's a difference between that and being forced to auto-update. Mobile apps are also usually sandboxed heavily (more so on iOS than Android, as I understand it... but I am not a mobile dev).
In my mind, auto updating was not one of the most attractive features of Chrome. In fact, auto updating for Chrome was one of the reasons I and some others I know stayed away from it for so long.
Web apps aren't binary blobs that users have to download and implicitly trust to not do nefarious things to their system. In fact, most browsers sandbox web pages to a large degree. Not to mention it's an entirely different architecture...with its own challenges.
Operating systems run on hardware ostensibly owned by individuals (although this is constantly being diminished by corporate practices). But average Joe running Win 8 is not a security risk for MS; it is a security risk for average Joe. Backwards compatibility is a huge cost for software companies in general, so I think the auto-updating is not so much about protecting you or me as about protecting Microsoft's bottom line. And using nefarious tactics to force and trick people into upgrading just underscores this point. It's taking (some) control of "your" system away from you and giving it to Microsoft. Not to mention that OSes are about as close to hardware as software can be.
Basically...there are tons of reasons why OS's are not mobile apps, desktops apps (like browsers), etc. and shouldn't just be auto-updated with whatever Microsoft decides to send down to your machine on a whim.
It is true that it's a security risk, but instead of forcing something that people clearly do not want, why not make something that they do want? Make a product people want to upgrade to.
Everyone is framing this like it's all for the user's benefit, but lets tell it like it is: It's a huge money-grab for MS that happens to have small benefits to users.
>It is true that it's a security risk, but instead of forcing something that people clearly do not want, why not make something that they do want? Make a product people want to upgrade to.
Two things. First, people want their computers to be secure. Most users do not seem to understand that keeping your system patched is a prerequisite to remaining secure, and those people have to be dragged kicking and screaming into running updates, which is why Windows 10 has gotten so aggressive about enforcing it. If someone gets exploited through a hole in Windows, it comes back to haunt MS; they learned that well and good in the Windows 98 days, and they learned that users will never install updates without being forced to do so in the XP days.
Second, the kind of people who are afraid to install security patches cannot be enticed by any new program or UI modification. They have a defensive dislike of computers. They just want things to stay the same forever, and even if you release new versions that are identical at the UI level, people still won't upgrade, because they're afraid it'll "break shit", as you succinctly put it.
End users may not like MS's aggression about updates but it is actually sensible for 99% of the userbase out there. MS's problems are unique among desktop OS vendors because of the wide and varied audience that relies on them to provide a good general-purpose OS.
It's reasonable that power users would resent MS's update policies, but MS does not have the luxury of tuning their release policy for the power user. Power users should be using not-Windows.
Except that even power users like having large libraries of available software, including things that may not be available on "not-Windows".
I'm fine with aggressive defaults and sensible settings for the 99%, but I'm accustomed to being able to disable things that cause me problems or get in the way of doing what I want to do. If a system doesn't get out of my way, it's possible for it to become more of an impediment than a tool.
The problem with making it toggleable is that a lot of people are going to flip that switch by accident, no matter how deep you bury it (even if you bury it in the registry only, someone will write spyware to flip it so that the computer becomes exploitable at a future date).
Most non-game software works on WINE, or, worst case, within a virtual machine. At Windows scale, your software must permanently operate in idiot-proof mode. If you don't need that protection, I believe you should use a different OS as your primary.
Even if they had that goal, there are other ways to do it.
When OS X came along, it shipped with an entire virtual machine for its previous OS. And while some low-level things didn’t work there, a surprising number of things did work. It gave people a path forward without requiring instant-OSX-ification of things.
"keeping your system patched is a prerequisite to remaining secure" Completely false. This will 100% NEVER be a sane security model. The one true way is to have verified secure software installed in the first place. That is not impossible. It is just more expensive than releasing patches as flaws are exposed. Do not be fooled by the general flow indicating correctness.
The type that are afraid of patches are often also afraid of the network, which is quite prudent. You judge them unfairly.
"Power users should be using not-Windows." Well then...
>"keeping your system patched is a prerequisite to remaining secure" Completely false. This will 100% NEVER be a sane security model. The one true way is to have verified secure software installed in the first place. That is not impossible. It is just more expensive than releasing patches as flaws are exposed. Do not be fooled by the general flow indicating correctness.
While theoretically possible, it is not currently reasonable to employ this model for modern general-purpose operating systems. We're going to have to live with requisite patching and updates for a long time yet.
It's rare that something actually breaks, but yes, if the exploit is serious it's better to break something with a security patch than leave a known attack vector unpatched.
> Chrome was championed for auto updating, and now all browsers do the same.
And they slowly ruin the UI and introduce incompatibilities with other software that I'd like to use. It's why I'm in Pale Moon instead of Firefox or Chrome right now.
> Web apps can auto update whenever they want.
Are you saying this is a positive attribute of web apps? I suppose it is, from the development side. From the user's perspective, it's one of the things I hate the most about web apps.
> Mobile apps auto update.
Often removing features I like and adding ads or features that I don't, and chewing up my system resources while they do it. I've got auto-updates disabled for most apps on my phone, and I read the changelog before manually updating them. Exceptions: financial apps, e-mail, encryption, and other sensitive pieces of software. I'm not insane...I just don't like things changing out from under me with little-to-no warning.
I agree with this viewpoint, but it's not popular with this crowd for obvious reasons (hence your downvotes). I think there's a bit of a double-edged sword here: people don't want to upgrade, but they want a vendor to provide updates and protection for a long time. This is the SaaS/webapp model moving to the OS, because big version-number upgrades have always sucked for all software, not just OSes.
Where else do people expect a single anonymous purchase of around a hundred dollars to last them for 5+ years with a vendor constantly making improvements to it?
It's a good upgrade, it's free, and people had better get used to it. It's a good choice for most people, so unless you feel like compiling your own OS, it's gonna be stuff like this, ChromeOS, or buying hardware/OS as a single package (Apple).
Yes, plenty of us on here are hackers, or "special users" that might not think it's a good choice, but for the bulk of windows 7, and 8 users this is a good path and their experience will improve with it.
How long does a net revenue of <$1M (and that's just the handful that report a profit at all, at least in your linked article - many are losing millions) take to pay off a $70M stadium? The expenses listed don't appear to include repayments on the stadium.
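A quick sketch of why it may literally never pay off once financing costs enter the picture. The 5% rate and the flat $1M annual payment are assumptions for illustration, not figures from the article:

```python
# Hypothetical stadium loan: does <$1M/year of net revenue ever repay $70M?
principal = 70_000_000
rate = 0.05            # assumed annual interest rate on the debt
payment = 1_000_000    # optimistic: the full reported net revenue

balance = principal
for year in range(30):
    balance = balance * (1 + rate) - payment

# Year-one interest alone is $3.5M, more than the payment, so the
# balance grows every year instead of shrinking.
print(balance > principal)   # True
```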
I only feel safe using end-to-end encrypted chatrooms. Currently, niltalk can read every message. At the very least, AES-encrypting messages with a key derived from the chatroom's password would reduce reliance on SSL. But it really should use public-key crypto for a key exchange between users. This is what's done by other disposable chatrooms:
New keypairs would be generated on the client every time you join a chatroom. Another member of the chatroom sends you the shared_key encrypted by your public key. Server knows nothing, stores no keys. Keys exchanged between users.
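The join flow above could be sketched like this. It uses textbook RSA with tiny fixed primes purely for illustration; a real client would generate large random primes and use a vetted library with proper padding, never raw RSA like this:

```python
# Toy sketch of the described join flow -- illustrative only, NOT secure.

def make_keypair():
    # Tiny hardcoded primes for demonstration; a real client generates
    # large random ones on every join.
    p, q = 61, 53
    n = p * q                    # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                       # public exponent (coprime to phi)
    d = pow(e, -1, phi)          # private exponent (requires Python 3.8+)
    return (n, e), d

# 1. The joining client generates a fresh keypair and publishes (n, e).
pub, priv = make_keypair()

# 2. An existing member encrypts the room's shared key under the
#    newcomer's public key; the server merely relays this ciphertext.
shared_key = 1234                # stand-in for the symmetric room key
n, e = pub
ciphertext = pow(shared_key, e, n)

# 3. The newcomer decrypts with a private key that never left the device.
recovered = pow(ciphertext, priv, n)
print(recovered == shared_key)   # True
```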
When you re-download the codebase on every use, there is no way to ensure the integrity of the code. This is why Cryptocat ships as a Chrome extension: it is downloaded once. Even with these issues, I'd take JavaScript crypto + open source over nothing (or just SSL).
> New keypairs would be generated on the client every time you join a chatroom. Another member of the chatroom sends you the shared_key encrypted by your public key. Server knows nothing, stores no keys. Keys exchanged between users.
The question is: how does the first public key exchange happen? It has to be done outside of the site for it to be secure, and your private key must exist locally on your device, which is contradictory to the premise of these websites.
But all forms of exchange are potentially vulnerable; the point of using multiple channels for authentication is to increase the challenge space for potential attackers. Indeed, the chief benefit of public-key encryption is that the key can be exchanged over a multitude of channels, and a compromise of just some of them does not jeopardize the entire operation. Perhaps we need more authentication systems where this is made explicit, with trust based on the number of different mediums the key is transferred over (or the number of different third-party signers).