Hacker News | pielud's comments

It lets you comment out a line without having to remove the trailing comma from the previous line. That'd be useful if JSON had comments.

I've seen people do this in the SELECT portion of SQL queries too.

Personally, I hate this.


> It lets you comment out a line without having to remove the trailing comma from the previous line

That's only an advantage over comma-at-the-end on the last line. It really just moves the problem from the last line to the first one: now you can't comment out the first line without removing a comma...


In the comma-first style, adding a new element at the end produces a one-line diff. With the comma-last style, adding a new element to the list gives a two-line diff.

It can make resolving merges just a little bit easier.
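The diff-size claim is easy to check mechanically. Here's a small Python sketch (the list contents are made up for illustration) that uses difflib to count the +/- lines an append produces in each style:

```python
import difflib

# Hypothetical JSON-ish list in each style, before and after appending "baz".
comma_last_before = ['[', '  "foo",', '  "bar"', ']']
comma_last_after  = ['[', '  "foo",', '  "bar",', '  "baz"', ']']

comma_first_before = ['[ "foo"', ', "bar"', ']']
comma_first_after  = ['[ "foo"', ', "bar"', ', "baz"', ']']

def changed_lines(a, b):
    """Return only the +/- lines of a unified diff (no headers, no context)."""
    diff = difflib.unified_diff(a, b, lineterm="")
    return [l for l in diff if l[:1] in "+-" and l[:3] not in ("+++", "---")]

# Comma-last: the old last line is rewritten to gain a comma, plus the new line.
print(len(changed_lines(comma_last_before, comma_last_after)))
# Comma-first: only the new line appears in the diff.
print(len(changed_lines(comma_first_before, comma_first_after)))
```

The comma-last append touches the previous line (one removal, one modified re-add) on top of the new line, while the comma-first append shows up as a single added line.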


I think it's great when doing ad hoc analysis that only you are ever going to look at, though maybe I just need a better IDE. Another neat/silly trick is to add a truthy condition at the beginning of a WHERE clause:

   select * 
   from foo
   where 1=1 
     AND bar = 1
     AND baz = 2
That way you can comment out any of your conditions without breaking the syntax.


If IE didn't barf on a trailing comma after the last value, we wouldn't have this problem. That's probably the biggest source of all my IE JavaScript bugs. To be fair, I'm not actually sure what the spec says, but the spec might be wrong :-)


IE9 and above should handle it fine, so it's just a matter of time before this problem goes away. In the meantime there are two things you can do: use an editor which highlights this as an error (e.g. WebStorm), or add a pre-commit hook to your code repository which disallows committing code that doesn't pass JSLint checks. We use both practices on a 200-kLOC JavaScript codebase, and it has essentially eliminated customer-facing JavaScript errors caused by bad syntax (as well as those tricky = / == / === issues).


True about the comment. That's why I like the trailing-comma style in pep8[1], which enables both commented-out array elements and smaller, more meaningful diffs when making additions or deletions. If only trailing commas were valid JSON!

[1] https://dev.launchpad.net/PythonStyleGuide
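And indeed, a strict JSON parser rejects trailing commas even though the equivalent Python literal is fine. A quick check with the stdlib:

```python
import json

# Python literals tolerate a trailing comma...
python_literal = ["a", "b",]

# ...but strict JSON does not.
try:
    json.loads('["a", "b",]')
    trailing_comma_ok = True
except json.JSONDecodeError:
    trailing_comma_ok = False

print(trailing_comma_ok)               # the trailing comma is a parse error
print(json.loads('["a", "b"]'))        # valid without it
```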


I do it in UPDATEs:

    UPDATE FOO
    SET BAR = 1
    ,   BAZ = 2
    ,   QUUX = 3
I like it that way. It makes the relationship between the continuation lines and their parent clear. It's just the natural extension of

    WHERE ALICE = 1
    AND BOB = 2
    AND CHARLIE = 3


That seems backwards to me. I put the operator on the previous line so I know the expression has more parts coming.

In your example, I don't know SET BAR = 1 isn't the end until I read the next line.


Different strokes I guess. I've tried both ways and I've found this one pleasant for scanning down an expression.

I mean, it's kind of the SQL analogue to the method-chaining style:

  MyObject
  .Child
  .Grandchild
  .ItsMethod(stuff)
  .HeyThatReturnedAnotherObject()
  .MoreObject(moreStuff)


> It lets you comment out a line without having to remove the trailing comma from the previous line. That'd be useful if JSON had comments.

It also lets you remove the line (for the same reason that you can comment it out) without modifying other lines. This is somewhat convenient if the file is one that you are going to manually edit (and even more useful if it is going to be the subject of line-oriented diffing tools, since the only changed lines will be the ones with meaningful changes.)

I'd agree that it's less readable, though.


It also lets you add a line at the end (the most common case) without modifying the previous line.


If you use alphabetically ordered keys (a pretty good practice anyhow), that advantage goes away. If you just develop a habit of adding stuff to the front where semantics do not matter, that advantage goes away.


It also lets you find missing commas more easily.


I found it almost a necessity when I started generating larger DOM trees in JavaScript. No more games of hunt-the-missing-comma-on-the-ragged-edge: https://gist.github.com/insin/8e72bed793772d82ca8d (These syntax woes are one of the main problems React's JSX solves)


It would also make removing a line a 1-line diff instead of 2. But I still fully agree with the hate.


This looks pretty cool, but why not just run sshd on port 80/443 then use corkscrew[1] to tunnel through an http proxy?

[1] http://www.agroman.net/corkscrew/


Does corkscrew still use HTTP CONNECT? That's often blocked.


Ah yes, looks like it does use CONNECT.


For python, there's pexpect: http://pexpect.readthedocs.org/en/latest/
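A minimal sketch of what pexpect does (assuming pexpect is installed): it allocates a pty, so the child program behaves as if a human were typing at it. Here it drives an interactive Python session:

```python
import sys
import pexpect

# Spawn an interactive Python session on a pty.
child = pexpect.spawn(sys.executable, encoding="utf-8", timeout=10)
child.expect(">>> ")          # wait for the first prompt
child.sendline("6 * 7")
child.expect(">>> ")          # wait for the next prompt
output = child.before         # everything printed between the two prompts
child.sendline("exit()")
child.close()

print(output)
```

`child.before` holds the echoed input plus the interpreter's answer, which is the core expect-style pattern: wait for a prompt, send a line, harvest what came back.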


Something similar for Lua: http://www.tset.de/lpty/


We used to use tcl expect before discovering pexpect. Cleaner syntax, fewer false positives, nothing but good experiences with pexpect.


I used pexpect as a base technology for ShutIt:

http://ianmiell.github.io/shutit/

It really has 95% of Tcl expect's functionality and is much easier to debug and understand. I'm grateful it exists, and I think it's much under-used.


Nice.

I have a somewhat similar tool but it doesn't do much besides check for password expiry and do password changes.

It uses pexpect but also multiprocessing and multiprocessing.Queue. I built most of it before we started using Ansible at work, but it is still useful in those places where Ansible is clumsy.


Interesting - where is Ansible clumsy?


(just noticed this)

I was just sort of echoing your

    annoyed by the obfuscation and indirection of ...
Ansible's prime raison d'être is not remote execution but configuration management.


It was great to see ShutIt during the last Docker London!


Thanks! A previous similar talk is the first link here: https://www.youtube.com/user/ianmiell


Thanks. Good luck Shutit and 2048 scores! :)


You could probably do something like that, but it'd be a huge waste of time. The time to brew a batch is pretty much the same no matter the volume, e.g. brewing a 5-gallon batch takes about the same amount of time as brewing 20 gallons, assuming you have equipment capable of handling that volume.


I think huherto is suggesting beer concurrency. That is, instead of having one 20-gallon setup, have four 5-gallon setups. It will take longer because you will have to do the mixing, testing, etc. four times, but if the longest part of the process is waiting, you win in that respect.

The question is: would this make it easier to be more consistent?


As a homebrewer I think this would be a pretty rough way to try and scale. The actual brew time would be the same, but you've increased your cleaning and maintenance significantly, you need a solution to pipe from multiple stations into fermenting vessels, you need a significant amount of extra space dedicated to brewing that could otherwise be used for fermentation vessels, etc.

I think the right answer is to get your equipment and do test batches to rework your recipes at scale. If you're successful as a brewery it's a process you'll have to do multiple times as you grow anyways, so avoiding it once seems like a silly optimization.


Thanks latj, this is what I was thinking. Big batches may be a good model for a large brewery but not necessarily for a small one that is growing organically.

I can imagine several advantages. You can replicate without having to extrapolate quantities, pressure, etc. You get to run more experiments; I can envision a supervised machine-learning system that learns which parameters make the best beer. You don't throw out big batches, etc. Sure, it may require more labor, but you get other advantages.


How about a co-op of home brewers: everyone agrees to brew a certain recipe of beer that month, all the beer gets blended together and redistributed. What would that taste like?

I visited a village once that did this with their wine and distilled liquor.


The killer feature for me with ddg (as a programmer) is the ! searches for documentation:

  !php strstr
  !python os.makedirs
  !pypi requests
  !js String
etc.

This gives you a single search bar for all documentation, which is amazing.

edit: formatting


That's absolutely fabulous. I'm a DDG user already, but didn't know about this trick. A brilliant time saver.

Thanks for sharing.


Here are all of the supported ! searches:

https://duckduckgo.com/bang.html

or just search ddg for !bang


Wow I didn't know about this. +100 for DDG.


Absolutely. This is why DDG is the default search in my browser: site-specific search for all the sites I actually care about, rolled into one easy command-line interface. It's also replaced the desktop calculator, thanks to Wolfram Alpha integration.

seriously, I don't know how I would function without !python, !pypi, !w, !gmap, and !hn


I already have that in my browser. And have since before DDG was launched.


This is ultimately the thing that got me to switch to DDG about 2 years ago.


Ironically, !c# brings you to a forbidden place :p


!csharp doesn't, which is even more ironic.


As another reply has suggested, David Friedman's Machinery of Freedom is an excellent book which describes how private defense, law and dispute resolution might happen.

This video is a good introduction to the ideas presented in the book http://youtu.be/jTYkdEU_B4o


I take a similar approach. I have a dotfiles git repo in ~/dotfiles/ and have a Makefile which creates symlinks in my home directory. For example, ~/.bashrc is a symlink to ~/dotfiles/bashrc. That way, I can have a whitelist (whatever's in the Makefile) instead of a blacklist (like a .gitignore).
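The same whitelist idea can be sketched in Python instead of a Makefile (the DOTFILES names here are hypothetical stand-ins for whatever the Makefile would declare):

```python
from pathlib import Path

# Hypothetical whitelist: only these files ever get linked into $HOME.
DOTFILES = ["bashrc", "vimrc", "gitconfig"]

def link_dotfiles(repo: Path, home: Path) -> None:
    """Symlink home/.<name> -> repo/<name> for each whitelisted file."""
    for name in DOTFILES:
        target = repo / name
        link = home / f".{name}"
        if link.is_symlink() or link.exists():
            link.unlink()          # replace a stale link or plain file
        link.symlink_to(target)

# Usage: link_dotfiles(Path.home() / "dotfiles", Path.home())
```

Because only whitelisted names are touched, everything else in the home directory is ignored, which is exactly the whitelist-over-blacklist property the Makefile buys.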



Excel will interpret any number with leading zeroes, e.g. 0000123, as the integer 123, even if you change the column format to "text". It's infuriating.

The only way I found to get around this was to open a new workbook, change the column type to "text" and then paste the data in. I believe this was excel 2010 on windows.


You can add an apostrophe before the leading zeros, and the cell will keep the zeros (Excel then stores the value as text and hides the apostrophe).


1. Your ORM can derive its schema from the database's.

2. I think the main point of the post was that you can run ad-hoc SQL queries no matter how much you've denormalized. You can't necessarily do that with a NoSQL database.


Yet you are still using schemas in two places: not DRY. SQL just doesn't fit OOP design. Most languages today have strong functional capabilities, which makes SQL obsolete; they have functions and event systems. RDBMSes exist because people used to query those systems directly. Do your users log into your database directly? No, they connect to your database through middleware; that's where the job should be done.


DRY is "Don't Repeat Yourself", not "Don't Repeat Ever". If your ORM fully correctly derives everything it needs from your database schema, that doesn't count as a repeat. If your ORM needs a couple of hints, but those hints are indeed extra information that your DB can't have ("this isn't just a string, it's an IP address") that doesn't count as a repeat. If you personally typed your schema into a database creation script and then you personally also typed the same schema into your ORM specification, only then are you violating DRY.

Code generation is a powerful DRY tool, not something it suggests avoiding!

I say that independent from the question of whether you should be using an ORM at all. If you are going to use it, either your ORM should come out of the DB or be responsible for creating the DB, either is fine, as long as you're not saying the same thing twice.


> DRY is "Don't Repeat Yourself", not "Don't Repeat Ever".

It's actually "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system."

(this advances your point even further)


And makes clear that DRY is just a handy catchphrase for the design principle of normalization.


> RDBMS exist because people use to query those systems directly , does your users log into your database directly ?

No, but I do. What if I want to analyze my data in some way I hadn't expected? SQL lets me do that with a single query.


Do you really need a schema for this?

Look at web server logs: they have no schema and each line is unrelated to the others, yet we are able to analyze them.


How do I combine data from multiple log files? How do I tweak my "queries" without scanning all of the data again? You get these kinds of things (joins, indexes, etc.) for free from an RDBMS. If I'm analyzing log files, I have to write my own code to do it.

I'm not saying an RDBMS is the best solution for everything, and neither is the OP. But it's appropriate when you don't necessarily know every way you'll want to access your data up front.


Many of the more advanced log analysis tools actually parse the logs and put them in a SQL database.
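That workflow is easy to sketch with nothing but the stdlib (the log lines below are made up): parse each line, load it into SQLite, and joins, indexes, and ad-hoc queries come for free:

```python
import sqlite3

# Hypothetical access-log lines: method, path, HTTP status.
LOG_LINES = [
    "GET /index.html 200",
    "GET /missing 404",
    "GET /index.html 200",
    "POST /login 500",
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE requests (method TEXT, path TEXT, status INTEGER)")
db.executemany(
    "INSERT INTO requests VALUES (?, ?, ?)",
    (line.split() for line in LOG_LINES),
)
# An index is reusable across every query you dream up later.
db.execute("CREATE INDEX idx_status ON requests(status)")

# An ad-hoc question nobody anticipated when the logs were written:
errors = db.execute(
    "SELECT status, COUNT(*) FROM requests WHERE status >= 400 GROUP BY status"
).fetchall()
print(errors)
```

Once the lines are rows, "tweak the query" means editing one SELECT rather than re-scanning and re-parsing the raw files.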


Or you could drop stupid, stupid OOP, and then you won't have a mismatch.


Even when you use OOP languages, some tasks are really badly suited for OOP, so just code them as if the language were imperative or functional. The relevant example here is reporting, where SQL is one of the most suited languages for the task. Use the right tool for the right job.


I don't ever use git -p directly, but I use magit (https://github.com/magit/magit) for emacs, which makes staging changes this way very easy.


Lately I find myself launching an emacs session just so I can use magit for projects I build in Xcode.

It makes crafting commits a joy.


+1 emacs users should definitely try magit for this. I use the git CLI a ton but never for interactive staging at the hunk / sub-hunk level.

