> It lets you comment out a line without having to remove the trailing comma from the previous line
That's only an advantage over trailing commas for the last line. It really just moves the problem from the last line to the first one: now you can't comment out the first line without removing a comma...
In the comma-first style, adding a new element at the end produces a one-line diff. In the comma-last style, adding a new element to the list produces a two-line diff.
It can make resolving merges just a little bit easier.
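A quick way to see the difference is to count the changed lines each style produces. Here's a minimal Python sketch using the stdlib `difflib` module (the list contents are made up for illustration):

```python
import difflib

def changed(a, b):
    """Count the +/- lines in a unified diff, ignoring the file headers."""
    return sum(1 for l in difflib.unified_diff(a, b, lineterm="")
               if l.startswith(("+", "-"))
               and not l.startswith(("+++", "---")))

# Comma-last: appending 'baz' edits the old last line AND adds a new one.
last_before = ["[", "  'foo',", "  'bar'", "]"]
last_after  = ["[", "  'foo',", "  'bar',", "  'baz'", "]"]

# Comma-first: appending 'baz' only adds a new line.
first_before = ["[ 'foo'", ", 'bar'", "]"]
first_after  = ["[ 'foo'", ", 'bar'", ", 'baz'", "]"]

print(changed(last_before, last_after))    # 3 diff lines (1 removed, 2 added)
print(changed(first_before, first_after))  # 1 diff line
```

The comma-last append shows up as three diff lines because the old last line is both removed and re-added with a comma; the comma-first append touches nothing that already existed.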
I think it's great when doing ad-hoc analysis that only you are ever going to look at, though maybe I just need a better IDE. Another neat/silly trick is to add a truthy condition at the beginning of a WHERE clause:
SELECT *
FROM foo
WHERE 1=1
  AND bar = 1
  AND baz = 2
That way you can comment out any of your conditions without breaking the syntax.
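The same trick helps when building queries programmatically: because every condition starts with `AND`, any entry can be dropped without special-casing the first one. A hypothetical sketch using Python's stdlib `sqlite3` (table and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (bar INTEGER, baz INTEGER)")
conn.executemany("INSERT INTO foo VALUES (?, ?)", [(1, 2), (1, 3), (2, 2)])

# Each condition is a self-contained "AND ..." fragment; commenting any
# of them out (or passing an empty list) still yields valid SQL.
conditions = ["AND bar = 1", "AND baz = 2"]
sql = "SELECT * FROM foo WHERE 1=1 " + " ".join(conditions)
rows = conn.execute(sql).fetchall()
print(rows)  # [(1, 2)]
```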
If IE didn't barf on a trailing comma after the last value, we wouldn't have this problem. That's probably the biggest source of all my IE JavaScript bugs. To be fair, I'm not actually sure what the spec says, but the spec might be wrong :-)
IE9 and above should handle it fine, so it's just a matter of time before this problem goes away. In the meantime there are two things you can do: use an editor that highlights this as an error (e.g. WebStorm), or add a pre-commit hook to your repository that disallows committing code that doesn't pass JSLint checks. We use both practices on a 200-kline JS codebase, and it has essentially eliminated JavaScript syntax errors at the customer (as well as those tricky = / == / === issues).
True about the comment. That's why I like the trailing comma style in pep8[1], which enables both commented out array elements and smaller, more meaningful diffs when making additions or deletions. If only trailing commas were valid JSON!
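To illustrate that last point: Python happily accepts the trailing comma that strict JSON rejects. A small stdlib sketch:

```python
import json

# Python tolerates a trailing comma after the last element...
xs = [
    "a",
    "b",
]
assert xs == ["a", "b"]

# ...but the same text is invalid JSON and fails to parse.
try:
    json.loads('["a", "b",]')
except json.JSONDecodeError:
    trailing_comma_ok = False
else:
    trailing_comma_ok = True
print(trailing_comma_ok)  # False
```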
> It lets you comment out a line without having to remove the trailing comma from the previous line. That'd be useful if JSON had comments.
It also lets you remove the line (for the same reason that you can comment it out) without modifying other lines. This is somewhat convenient if the file is one that you are going to manually edit (and even more useful if it is going to be the subject of line-oriented diffing tools, since the only changed lines will be the ones with meaningful changes.)
If you use alphabetically ordered keys (a pretty good practice anyhow), that advantage goes away. If you just develop a habit of adding stuff to the front where semantics do not matter, that advantage goes away.
I found it almost a necessity when I started generating larger DOM trees in JavaScript. No more games of hunt-the-missing-comma-on-the-ragged-edge: https://gist.github.com/insin/8e72bed793772d82ca8d (These syntax woes are one of the main problems React's JSX solves)
I have a somewhat similar tool but it doesn't do much besides check for password expiry and do password changes.
It uses pexpect but also multiprocessing and multiprocessing.Queue. I built most of it before we started using Ansible at work, but it is still useful in those places where Ansible is clumsy.
You could probably do something like that, but it'd be a huge waste of time. The time to brew a batch is pretty much the same no matter the volume; brewing a 5-gallon batch takes about the same amount of time as brewing 20 gallons, assuming you have equipment capable of handling that volume.
I think huherto is suggesting beer concurrency. That is, instead of having one 20-gallon setup, having four 5-gallon setups. It will take longer because you will have to do the mixing, testing, etc. four times, but if the longest part of the process is waiting, you win in that aspect.
The question is: would this make it easier to be more consistent?
As a homebrewer I think this would be a pretty rough way to try and scale. The actual brew time would be the same, but you've increased your cleaning and maintenance significantly, you need a solution to pipe from multiple stations into fermenting vessels, you need a significant amount of extra space dedicated to brewing that could otherwise be used for fermentation vessels, etc.
I think the right answer is to get your equipment and do test batches to rework your recipes at scale. If you're successful as a brewery, it's a process you'll have to do multiple times as you grow anyway, so avoiding it once seems like a silly optimization.
Thanks latj, this is what I was thinking. Big batches may be a good model for a large brewery but not necessarily for a small one that is growing organically.
I can imagine several advantages. You can replicate without having to extrapolate quantities, pressure, etc. You get to run more experiments; I can envision a supervised machine learning system that learns which parameters make the best beer. You don't throw out big batches. Sure, it may require more labor, but you get other advantages in return.
How about a co-op of home brewers: everyone agrees to brew a certain recipe of beer that month; all the beer gets blended together and redistributed. What would that taste like?
I visited a village once that did this with their wine and distilled liquor.
Absolutely. This is why DDG is the default search in my browser. It rolls site-specific search for all the sites I actually care about into one easy command-line interface. It has also replaced the desktop calculator, thanks to Wolfram Alpha integration.
seriously, I don't know how I would function without !python, !pypi, !w, !gmap, and !hn
As another reply has suggested, David Friedman's Machinery of Freedom is an excellent book which describes how private defense, law and dispute resolution might happen.
I take a similar approach. I have a dotfiles git repo in ~/dotfiles/ and have a Makefile which creates symlinks in my home directory. For example, ~/.bashrc is a symlink to ~/dotfiles/bashrc. That way, I can have a whitelist (whatever's in the Makefile) instead of a blacklist (like a .gitignore).
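The Makefile itself isn't shown, but the whitelist-of-symlinks idea can be sketched in a few lines of Python. Everything here is hypothetical: the file names are examples, and a temporary directory stands in for $HOME so the sketch is safe to run:

```python
import os
import tempfile

# Hypothetical whitelist, mirroring the targets a Makefile might list.
DOTFILES = ["bashrc", "vimrc"]

home = tempfile.mkdtemp()                 # stand-in for $HOME
repo = os.path.join(home, "dotfiles")     # stand-in for ~/dotfiles
os.mkdir(repo)
for name in DOTFILES:
    open(os.path.join(repo, name), "w").close()

# Link ~/.bashrc -> ~/dotfiles/bashrc, and so on for each whitelisted file.
for name in DOTFILES:
    os.symlink(os.path.join(repo, name), os.path.join(home, "." + name))

print(sorted(os.listdir(home)))  # ['.bashrc', '.vimrc', 'dotfiles']
```

Anything not listed in `DOTFILES` simply never gets linked, which is the whitelist property the Makefile approach gives you.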
Excel will interpret any number with leading zeroes, e.g. 0000123, as the integer 123, even if you change the column format to "text". It's infuriating.
The only way I found to get around this was to open a new workbook, change the column type to "text", and then paste the data in. I believe this was Excel 2010 on Windows.
1. Your ORM can derive its schema from the database's.
2. I think the main point of the post was that you can run ad-hoc SQL queries no matter how much you've denormalized. You can't necessarily do that with a NoSQL database.
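As a sketch of point 1, here's how schema discovery can work at the database level, using SQLite's `PRAGMA table_info` introspection rather than any particular ORM (the table and columns are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

# PRAGMA table_info yields (cid, name, type, notnull, dflt_value, pk) per
# column; an ORM can discover this instead of having it declared twice.
columns = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(users)")}
print(columns)  # {'id': 'INTEGER', 'email': 'TEXT'}
```

Real ORMs do the same kind of reflection with richer metadata, but the principle is identical: the database remains the single source of truth for the schema.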
Yet you are still using schemas in two places, which isn't DRY. SQL just doesn't fit OOP design. Most languages today have strong functional capabilities, which makes SQL obsolete: they have functions and event systems. RDBMSes exist because people used to query those systems directly. Do your users log into your database directly? No, they connect to your database through middleware, and that's where the job should be done.
DRY is "Don't Repeat Yourself", not "Don't Repeat Ever". If your ORM fully correctly derives everything it needs from your database schema, that doesn't count as a repeat. If your ORM needs a couple of hints, but those hints are indeed extra information that your DB can't have ("this isn't just a string, it's an IP address") that doesn't count as a repeat. If you personally typed your schema into a database creation script and then you personally also typed the same schema into your ORM specification, only then are you violating DRY.
Code generation is a powerful DRY tool, not something DRY suggests avoiding!
I say that independently of the question of whether you should be using an ORM at all. If you are going to use one, your ORM should either come out of the DB or be responsible for creating the DB; either is fine, as long as you're not saying the same thing twice.
How do I combine data from multiple log files? How do I tweak my "queries" without scanning all of the data again? You get these kinds of things (joins, indexes, etc.) for free from an RDBMS. If I'm analyzing log files, I have to write my own code to do it.
I'm not saying RDBMS is the best solution for everything and neither is OP. But it's appropriate when you don't necessarily know every way you want to access your data up front.
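As a sketch of that trade-off, here's a toy example (the log data is invented) that loads two "log files" into SQLite and gets joins and indexes for free, instead of hand-writing merge code:

```python
import sqlite3

# Toy log records as (day, user, ...) tuples, standing in for parsed lines
# from two hypothetical log files.
access_log = [("2015-01-01", "alice", 200), ("2015-01-01", "bob", 500)]
error_log = [("2015-01-01", "bob", "timeout")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access (day TEXT, user TEXT, status INTEGER)")
conn.execute("CREATE TABLE errors (day TEXT, user TEXT, reason TEXT)")
conn.executemany("INSERT INTO access VALUES (?, ?, ?)", access_log)
conn.executemany("INSERT INTO errors VALUES (?, ?, ?)", error_log)

# An index means tweaked queries don't have to rescan everything.
conn.execute("CREATE INDEX idx_access_user ON access(user)")

# An ad-hoc join across the two "files" -- no custom merge code needed.
rows = conn.execute("""
    SELECT a.user, a.status, e.reason
    FROM access a JOIN errors e ON a.user = e.user
""").fetchall()
print(rows)  # [('bob', 500, 'timeout')]
```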
Even when you use OOP languages some tasks are really badly suited for OOP, so then just code it like it was imperative or functional. The relevant example here is reporting, where SQL is one of the most suited languages for this task. Use the right tool for the right job.
I've seen people do this in the SELECT portion of SQL queries too.
Personally, I hate this.