Agree. I didn't go to a single lecture after 2 months at university (electrical engineering). My TA was shit as well - he was too tied up in computer vision research and found us inconvenient. However, we formed a club (5 of us who actually gave a shit) to work through stuff as students. Rather awesomely, our digital electronics lecturer turned up after a couple of weeks, helped us with stuff and had a weekly rant about how useless the other staff were. I think we helped him vent too.
I've been through EVERY ASP.NET update on every version of .NET and every MVC update from CTP2 onwards, dealt with WWF being canned and rewritten, moved APIs between old SOAP stuff (asmx), WCF and WebAPI, rewritten swathes of ASP VB and C++ COM code, ported EF stuff to later versions and worked around piles of framework bugs including the MS11-100 fiasco. That and been left royally in the shit with Silverlight.
Not one of the above has actually improved the product we produce; they are all reactionary "we might get left in the shit again" changes.
> Not one of the above has actually improved the product we produce
You didn't see an improvement from early versions of .NET to later versions? No improvement from WebForms to ASP.NET MVC? I have a very hard time believing that. The system is SO MUCH more mature than it used to be.
The difference between WebForms and MVC was huge... (no more postbacks)
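For anyone who never made the switch, a minimal sketch of what "no more postbacks" buys you (OrdersController and the route are made up for illustration): each URL goes straight to a plain method, with no page lifecycle, ViewState or __doPostBack.

    using System.Web.Mvc;

    public class OrdersController : Controller
    {
        // GET /Orders/Details/42 is routed straight to this method.
        public ActionResult Details(int id)
        {
            ViewBag.OrderId = id;  // hand data to the view explicitly
            return View();         // renders Views/Orders/Details.cshtml
        }
    }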
But I think Microsoft is slowly finding the correct way of doing things, and yes, there is a lot of influence from NodeJS (OWIN) and RoR (scaffolders). But it's easier to change within the same platform (while still using C#) than to change between web technologies.
The above user wasn't required to switch to Silverlight; some web applications in my company still use ASP.NET 1.0 and it still works (not my projects, thank god)
Good point; I'm thinking in terms of code quality, which can be seriously improved by using newer language features. I mean, you could write C# without using generics or LINQ or extension methods or a dozen other meaningful improvements, but I can't imagine someone doing so on purpose.
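To make that concrete, a quick before/after sketch (User is a made-up type): the same filter written pre-generics versus with generics, LINQ and extension methods.

    using System.Collections.Generic;
    using System.Linq;

    class User { public string Name; public bool IsActive; }

    static class Example
    {
        // Pre-generics/pre-LINQ style: ArrayList, runtime casts, manual loop.
        static List<string> ActiveNamesOld(System.Collections.ArrayList users)
        {
            var names = new List<string>();
            foreach (object o in users)
            {
                var u = (User)o;            // cast can fail at runtime
                if (u.IsActive) names.Add(u.Name);
            }
            return names;
        }

        // Generics + LINQ + extension methods: the intent in one line.
        static List<string> ActiveNamesNew(IEnumerable<User> users) =>
            users.Where(u => u.IsActive).Select(u => u.Name).ToList();
    }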
You would get a null reference exception if you unrolled that into a loop, too. You should know what your data is going to look like and whether any elements could be null. Don't blame LINQ for your lack of null checking.
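i.e. the same guard you'd write in a hand-rolled loop belongs in the query itself (a sketch with made-up Order/Customer types):

    using System.Collections.Generic;
    using System.Linq;

    class Customer { public string Name; }
    class Order { public Customer Customer; }

    static class NullGuards
    {
        // LINQ doesn't remove the need for the null checks a loop would have.
        static List<string> CustomerNames(IEnumerable<Order> orders) =>
            orders.Where(o => o != null && o.Customer != null)
                  .Select(o => o.Customer.Name)
                  .ToList();
    }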
Definitely, but that guarantee is thrown away the moment you pull in a 3rd-party black box.
I'm suggesting it's feasible but not necessarily perfect, and when it does go boom (which it does when you have 100 million HTTP hits a day) you need to be able to find out precisely where it went wrong. LINQ makes that damn hard:
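For instance, a sketch with made-up Order/Customer types:

    using System.Collections.Generic;
    using System.Linq;

    class Customer { public string Name; }
    class Order { public Customer Customer; }

    static class OpaqueFailure
    {
        static List<string> NamesStartingWithA(IEnumerable<Order> orders) =>
            orders.Where(o => o.Customer.Name.StartsWith("A"))  // null deref happens here...
                  .Select(o => o.Customer.Name)
                  .ToList();
        // ...but thanks to deferred execution the NullReferenceException only
        // fires when ToList() drives the iterator, from a compiler-generated
        // lambda frame, and nothing in the stack trace tells you WHICH of
        // millions of orders had a null Customer. A plain loop at least lets
        // you log the offending element before rethrowing.
    }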
In my opinion, it's not really any different outside of the Microsoft bubble, except outside you're dealing with different vendors. In the time Microsoft made all these technology changes, node.js came out of nothing, PHP got namespaces and package management, Python 3 became a thing nobody uses, Rails had several high-profile security bugs, etc, etc, etc.
It's the nature of the industry; if you stop moving you'll be left behind.
Interesting how different people have vastly different experiences. I've been using .NET since pre-release 1.0. Once I ran into my first reactionary "we might get left in the shit again" change I stopped using anything that wasn't in the core framework. I don't go near ORMs, and I don't touch LINQ either. I have code running in a project now that's 7 years old, and I cannot improve it, even after revisiting it many times. I love using Visual Studio, and I love writing .NET code. I try new things when I hear about them, but nothing has compelled me to change.
This reads like a standard list of a cocooned MS developer.
With ServiceStack, NancyFX, NHibernate, Dapper, OrmLite, etc. out there, could you have just been making some bad choices?
And anyone who knew web, knew Silverlight didn't have a future from the very beginning.
The only bad choice was the foundation on which to build the product. The above technologies are pretty much a response to the world being better elsewhere. Unfortunately the responses are pretty immature, poorly documented, with bad support reputations and virtually no backing.
I've used most of the above. ServiceStack was pretty good but nothing in comparison to Jersey, which has much better documentation, a cleaner architecture, stable migration notes and better performance. Look: https://jersey.java.net/documentation/latest/index.html
NancyFX I haven't touched.
NHibernate is a buggy behemoth with a learning curve from hell. It doesn't scale well with project size (we have 2000 tables, of which about 500 are NH-mapped "model-first") and it stinks. The SQL it generates is terrible, and when it goes wrong it's impossible to debug other than by sifting through 50 MB of log4net DEBUG-level logs. To add insult to injury, the LINQ provider is so buggy it's like programming with a hand grenade. I've used Hibernate in Java, and NHibernate isn't even a tenth of the way to its maturity and reliability.
Dapper - I really like Dapper. I have used it for data transforms before. However, it doesn't play well with low-trust environments as it uses IL emit.
OrmLite - see ServiceStack.
Not bad choices - just choices I either regret or had to make because the whole ecosystem is amateurish outside of Microsoft and unstable inside of Microsoft. Sorry.
Have you tried LINQ to DB (not LINQ to SQL), the successor to BLToolkit? We've had good results with it, but we like to keep our queries and data layer as simple and thin as possible.
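Something like this minimal sketch, assuming the linq2db NuGet package, a "Default" connection string in app.config, and a made-up Products table:

    using System.Linq;
    using LinqToDB;            // linq2db package
    using LinqToDB.Data;
    using LinqToDB.Mapping;

    [Table("Products")]        // hypothetical table for illustration
    public class Product
    {
        [PrimaryKey] public int    Id   { get; set; }
        [Column]     public string Name { get; set; }
    }

    public static class ProductQueries
    {
        public static void Demo()
        {
            using (var db = new DataConnection("Default"))  // assumed config name
            {
                // Translates to one plain SELECT that's easy to eyeball.
                var names = db.GetTable<Product>()
                              .Where(p => p.Name.StartsWith("A"))
                              .Select(p => p.Name)
                              .ToList();
            }
        }
    }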
As for the company, we've got over one million lines of C#. As new subsystems appear they are being moved to other technologies (Angular + Java + Jersey + Maven + Guice + Jetty + Redis + PostgreSQL). This takes time. The amount of code is tiny compared to the old implementations, and the time to market and reliability are awesome.
As for me: I'm in charge of the migration, and when it's gone I'm not even going to go to the funeral.
I'm more sorry for those stuck on .NET + Windows stack.
Been there, done that. Modern Java + its ecosystem blew .NET + Windows out of the water, the earth, the solar system pretty much.
PS: Language syntax is probably solving 1/10 of my day-to-day needs. That is to say, while C# might have nicer features in terms of the language, the rest (libs, the JVM, JVM tooling, the IDE, frameworks, portability) are way behind Java's.
This. Most of the code I write is gluing stuff together rather than inventing new things, which is what I'm paid to do.
A lot of people say this stifles creativity and is boring, but the productivity is off the scale. In a couple of days in Java, without spending a penny, I've built things that would cost £50,000+ in dev time and licenses and take 3 months in .NET (reporting, messaging, email processing, document scanning, parsing, bulk processing, ETL, matching, logging). This is because someone already did the legwork and released it under an open source license, meaning no license cost. Not only that, the ecosystem and Unix-based deployment platforms are all easy to automate, so jobs only get done once and stay working. Sure, you can throw out an ASP.NET MVC web app quickly, but that's the easy bit.
We can put £500,000 a year back into open source projects through time investment and donations if we replace .NET, and still do better than break even on the dev cost.
Dead or not, it's all we've got to serve some of the biggest US residential real estate corporations.
We are starting again; the mobile site for one of our 33,000 domains is in testing, but it will be over a year before we have most of our customers using it.
I haven't quite been working there a year yet. We're basically cannibalizing the entire company (codebase, data import methods, data storage, website and image hosting, MLS data warehousing, server and network infrastructure) to relaunch the company under the same name.
We're in the painful process of upgrading off ASP, DefaultHttpHandler code that requires 15-minute recycle limits, 32-bit DLLs with a GAC refresh process, 32-bit IIS 6.0, MS NLBs, SQL 2000 DTS and undocumented built-in-house data-import processes, SQL 2005 main database servers, PowerEdge 2850s, and sites that are neither cached nor accelerated.
Almost identical specs to a Lumia 520, too. 4 GB storage instead of 8 GB (both expandable via SD card). Even for a low-budget phone, I don't get why they don't go with at least 16 GB. 4 GB is pathetic.
It's even closer to the Nokia X. Once you've installed Google Play that's not bad, and abroad you can pick up the dual SIM version.
The Nokia Store is missing pretty basic apps, so it provides a selection of third-party stores for you to install. Alternatively you can sideload Google Play services.
Here in the Czech Republic (which is semi-Eastern Europe), the Lumia 520 is 50% more expensive than the ZTE Open C, and the 8 GB Moto G is twice the price.
Now, the Lumia 520 has a better camera and perhaps a better screen and CPU, and the Moto G is a whole different matter (720p, quad-core, 1 GB RAM), so the prices make sense. The cheap Chinese devices you can get for the same money (like the Huawei Y330) have generally the same hardware, so I don't think ZTE is overpricing the ZTE Open C.
So, while I can't give you prices, it's possible that the ZTE Open C is more competitive in markets like Eastern Europe or South America.
EDIT: the reason to get the ZTE Open C instead of, say, a Lumia 520 or Huawei Y330 would be that FxOS devices should have better browser performance for the money, at least going by the SVG comparison video, and the browser seems to be much better integrated into the system than on Android or Windows Phone.
I was unaware you could run Firefox OS on a Lumia 520. Do you have a link that describes how to do that? Or is there a vendor selling Lumia 520's with Firefox OS? I also have not seen Moto G's running Firefox OS.
EDIT: Possibly I should add some context to my question. I'm not sure why the comment is comparing a phone running a different operating system and different software. So I thought maybe these other phones could run Firefox OS and I just didn't know. I've been using Firefox OS for about a year and would not want to use a phone on a different OS. The article was about a new version of a Firefox OS phone.
At the bottom it lists the support, and I see all browsers marked "Not Supported", followed by this line: "This API is currently available on Firefox OS only for any installed applications."
Do the "principles" in your case revolve around openness, freedom, and so forth?
I've heard a lot of people use such ideals when advocating for Firefox OS, but I just don't see it all holding true in practice.
Firefox OS is one of the more restrictive environments, at least for developers. I'm basically stuck using JavaScript, HTML and CSS. If I want to use any other language, I have to try to shoehorn it through something like Emscripten. If I want to create a native app, I'm out of luck.
At least a platform like Android gives developers a comparatively wide variety of options, from Java, to C and C++, to JavaScript/HTML5/CSS.
By limiting the freedom of developers to create apps as they see fit, the platform directly impacts the freedom of end users to use such apps.
And I don't see Firefox OS as being particularly open in other respects. Maybe the code is available under an open source license, and maybe Mozilla will accept minor bug fixes from the community, but I really doubt an average user would have any ability to influence/impact/control Firefox OS beyond that. Decisions are foisted upon the users. It doesn't seem any better than Android, or iOS, or whatever other platform you want to consider.
The same goes for the "But we implement open standards!" claims. The process to come up with such standards isn't very open at all. It ends up being controlled by a small handful of major browser vendors, with minimal to no input from others. Merely being published does not make a standard "open".
All in all, it makes no sense to me to choose Firefox OS on a matter of principle. It doesn't actually meet whatever standard is being set by those principles, yet it still gives a much inferior experience to the alternatives.
There are different kinds of freedom and openness.
FxOS is made by Mozilla, which is very different in both goals and culture from Google, Microsoft and Apple. You don't need a special account with the OS vendor to unlock the full functionality of the phone, as with Android and WP, or even to use it at all, as with iOS. You don't need a special license (one that actually costs money, IIRC) to load your own software onto your own device, as with iOS or WP. And you can develop software for FxOS on any operating system, unlike with iOS and WP.
This is a different kind of openness from "can I use C", but to me it is more important. Personally I hate JavaScript with the strength of a thousand suns, but FxOS seems to be worth the price.
Everyone I know has a sub $100 USB interface hanging off their machine and swears constantly at the latency and wishes they'd just stuck with ASIO on Windows...
Personally I still have a workstation...
As for the iPods, they sound horrible. The latest Nano line has noticeable noise on the outputs.
You're complaining about WDM audio. There are built-in WaveRT and external ASIO stacks as well. The latter has become the standard for professional audio on Windows and is effectively direct hardware access.
USB audio devices both are and aren't polled: the cheap-ass speakers and the like are, but anything $20+ uses buffered raw device access.
I can get 8 ms out of a Windows 7 laptop with ASIO over USB that isn't even tuned. My Korg Triton Studio manages about 4 ms, and that's seriously high-end kit.
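Those figures line up with simple buffer arithmetic (a sketch, assuming the latency is dominated by the buffer size):

    static class Latency
    {
        // One-way buffer latency in milliseconds: samples / rate * 1000.
        static double BufferLatencyMs(int bufferSamples, int sampleRate) =>
            bufferSamples * 1000.0 / sampleRate;

        // BufferLatencyMs(384, 48000) = 8.0 -> the untuned-laptop figure
        // BufferLatencyMs(192, 48000) = 4.0 -> Triton Studio territory
    }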
Right, ASIO isn't Windows; it's 3rd-party, there to work around the Win7 issues.
WaveRT is what WDM uses, right? With the associated delays of crossing user/kernel mode. How does that help?
And USB devices are polled, via four transfer types as I recall: 'control', 'interrupt', 'isochronous' and 'bulk'. The frame interval depends on the USB speed: 1 ms frames at full speed, 125 µs microframes at high speed, so very adequate for audio sampling. I used to use it for serial protocols, and that frame rate was entirely inadequate for signalling, so I formed my low opinion of USB at that time.
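The frame arithmetic, for what it's worth (the 48 kHz stream is just an example):

    static class UsbFrames
    {
        // Full-speed USB frames tick every 1 ms; high-speed microframes every 125 µs.
        // Samples of audio delivered per service interval:
        static double SamplesPerInterval(double intervalMs, int sampleRate) =>
            sampleRate * intervalMs / 1000.0;

        // SamplesPerInterval(1.0,   48000) = 48 per full-speed frame
        // SamplesPerInterval(0.125, 48000) = 6  per high-speed microframe
        // Fine-grained enough for audio buffering; hopeless for bit-level
        // serial signalling, hence the low opinion above.
    }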
I'm terminating my Sky account over this today. At no point is this in my interest as a customer. They took £750,000 to voluntarily violate my privacy.
They obviously will have to develop a heuristic for suspicion and store the suspects somewhere. If that list becomes available by accident or court order, it puts people at litigation risk. Unfortunately, despite guarantees supposedly mandated by law, if you're confronted with a case you're screwed either on costs or on defence.
I would suggest people start moving to Andrews and Arnold or a similar smaller ISP immediately.
This must be the mildest agreement ever, and you're outraged? Considering the alternatives, this is amazing. The only information the rights holders get is the number of letters sent out. How is this not a win for the consumer?
"Creeping normality" also known as "death by a thousand cuts" or in the context of the "boiling frog" story - the war of attrition, lobbyists generally excel at this.
Most of us are too busy to notice and/or are underestimating the negative impact of a large sum of little policies.
"They actively monitor your connection": This isn't the case is it (for this purpose at least)? It's the rights holders that contact the ISPs with a list of IPs that they claim are infringing. All the ISPs are doing is mapping IP addresses to real addresses and sending letters? The ISPs don't actively monitor for copyright infringement...
That's how I read it - all it made me do was make a mental note to remind my teenage son to use a VPN if he downloads stuff (NB which I do not approve of, but I'm not going to monitor his internet access to the point where I could block such activity).
No, that's definitely not the case. Read the section titled "How the system will work". It specifically says that the rights holders will be doing this.
This amounts to less of a privacy invasion than performed by Facebook or Google, to my mind. In addition, anyone that is under the impression their activities online are currently anonymous is mistaken. It's straightforward for rights holders to track bittorrent users to an IP, which they can then use to identify the individual.
Most of my internet traffic doesn't flow through Google's or Facebook's infrastructure (at least not to my knowledge). All of it goes through my ISP's.
On the other hand, it's not so straightforward for the MPAA to identify the IPs of people downloading/streaming videos directly. ISPs can, of course. Also, without the cooperation of ISPs or a shady court system (one which assumes that an IP in a BitTorrent swarm is proof enough that a crime happened), it's not very simple to identify individuals from just an IP either.
Google isn't (yet) quite as good at predicting traffic though. For example, a cabby would know that if a Chelsea game's finishing at 5pm, he should avoid Stamford Bridge.
Thanks, that's a great example of where real-time data collection is not as good as local knowledge! Unfortunately it depends on whether the driver cares about football (or similar events in the city). I hope that Waze/Google/other services will start including information like that in the future.
Absolutely, and I honestly don't think we're far away from Google or someone (...but probably Google) integrating all of this data. Times for football fixtures, events at the O2 and planned demonstrations are all accessible online, it's just a case of an automated system parsing them and deciding the impact.
TfL and AA Roadwatch know that, so there is no reason Google couldn't. It's all data; after all, the cabby most likely got it from the same data through a different aggregation medium (radio/paper).