Yes, it parses posts based on the From address. The short answer is that it doesn't protect against email spoofing. I do have some filtering set up, but it hasn't been effective in my testing.
However, you do receive a confirmation email after every new post. So if someone spoofed your email to make a post, you would get a notification email about that post. In the notification, you'd see what had been posted and could easily delete it. (Note that the email address associated with an account isn't made public anywhere, so someone would need to have doxxed you in this scenario.)
Of course, it would be pretty easy for a determined attacker to set up a mailbomb that flooded someone's page with hundreds of spoofed posts. At that point I would have to disable email posting for that account to stop the attack.
“Never fix a bug and refactor in the same pull request.”
I’m sorry, but this is backwards. Bugs are often caused by badly written code, and you can usually tell when that’s the case. Refactoring the code frequently fixes the issue without your ever having to figure out where the needle was in the haystack.
I guess in every field there are platitudes and prescriptions. At the end of the day, I try to ignore them, follow first principles, and just focus on building a great product.
What I see in this article is a disregard for the costs of context switching. My argument is: if you think you can handle the rabbit hole, and those related tasks will need to be done at some point anyway, head off to Wonderland. You have the context of the situation fresh in working memory, so you’ll get it done faster than if you switch contexts and come back later.
It also misses the point that sometimes refactoring makes it _easier_ to fix the bug, and that a large part of fixing a bug is understanding exactly what's happening with the code, which refactoring can also make easier.
Overall I disagree with this article, both in its definition of yak shaving (as being unrelated to what you're doing) and in its assertion to never refactor and fix a bug in the same PR (though I'm not saying you should refactor every time you fix a bug, of course).
I'm going to throw an addendum onto this one: if you fix a bug and then do a refactor, I'll agree it's probably best to split the PRs up. But that's just the common sense of keeping changes small and logically contained. I'll also agree that it's best not to interweave the two.
Refactors usually take much longer than the bug fix, and while deferring them accrues technical debt, there may be more urgent things to take care of. The article is about focusing on your initial goal and filing the refactor as a next-step action item, instead of growing the scope endlessly.
I understand where you’re coming from. I would argue that if the amount of refactoring required to make the bug clear takes that much longer, that’s all the more reason it should be prioritized. This is really the purest definition of tech debt, because there may be other bugs in that code you’re unaware of. This assumes no tests cover the bug, because if they did, it wouldn’t have made it into production.

Honestly you should be doing it all, because you have to understand the full scope of the issue to properly fix the bug, test it to prevent regression, and, in order to test it, make it testable. So I would say if there are no tests and the code is not testable, the minimum refactoring you should do is make it testable. You don’t even have to write the test if you really want to cut corners: just applying the single-responsibility principle, using dependency injection, and writing code that could be tested brings it 90% of the way there. You can even break the dependencies, and as long as you don’t violate the interface, the functions you refactored should hold up. Once the code is simple and broken down enough, you reach the point where a function has one if statement and two return statements, and writing a test is actually redundant compared to the code. If it’s not mission-critical code and you’re in a hurry, you can really cut corners…
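The "make it testable via dependency injection, even without writing the test" step might look like this minimal Python sketch (the names `latest_price` and `fetch_quote` are illustrative, not from any real codebase):

```python
# Before: hard to test, because the function reaches out to a real service.
# def latest_price(symbol):
#     conn = connect_to_production_db()   # hidden dependency
#     ...

# After: the dependency is injected, so a test (or caller) can pass a stub.
def latest_price(symbol, fetch_quote):
    """fetch_quote is any callable mapping a symbol to a quote dict or None."""
    quote = fetch_quote(symbol)
    if quote is None:
        return None
    return quote["price"]

# No test suite required yet, but proving testability is now one line:
stub = lambda s: {"price": 42.0} if s == "ACME" else None
assert latest_price("ACME", stub) == 42.0
assert latest_price("MISSING", stub) is None
```

The function is now small enough (one if, two returns) that, as the comment above argues, a dedicated test adds little over the code itself.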
I think the rule should be more like: separate formatting changes (whitespace, layout, and so on) into their own commits, so that diffs of the actual fix are easier to read later.
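A sketch of that workflow in a throwaway repo (filenames, file contents, and commit messages are all made up for illustration):

```shell
# Keep a whitespace-only reformat and the real fix in separate commits,
# so reviewing with -w later shows only the meaningful change.
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email you@example.com && git config user.name You

printf 'def f(x):\n  return x+1\n' > fix.py
git add fix.py && git commit -qm "initial"

# Commit 1: indentation/layout only, no behavior change.
printf 'def f(x):\n    return x+1\n' > fix.py
git add fix.py && git commit -qm "reformat fix.py (whitespace only)"

# Commit 2: the actual fix, now a clean one-line diff.
printf 'def f(x):\n    return x + 2\n' > fix.py
git add fix.py && git commit -qm "fix off-by-one in f"

# -w ignores whitespace, so the reformat commit's diff disappears
# and the fix stands out:
git log -p -w -- fix.py
```

In an existing working tree you would use `git add -p` to stage the formatting hunks and the fix hunks separately instead of committing whole files.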
Thanks for building this. The speed boost is pretty insane. This would actually be great in an API layer that performs multiple operations on large query results in memory to reduce round trips to the database. I imagine you could add a simple condition that switches to ducks once the array size exceeds a certain length.
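The size-threshold dispatch could be sketched like this. Note this uses a plain dict index as a stand-in to illustrate the idea, not ducks' actual API, and the 1000-row cutoff is an unmeasured assumption:

```python
from collections import defaultdict

# Assumed cutoff: below it a linear scan is cheap enough; above it,
# building an attribute index first pays off. In a real API layer the
# large branch would hand off to ducks instead of this dict stand-in.
INDEX_THRESHOLD = 1000

def find_by_attr(rows, attr, value):
    """Return all rows (dicts) whose `attr` equals `value`."""
    if len(rows) < INDEX_THRESHOLD:
        # Small result set: a straight scan avoids index-build overhead.
        return [r for r in rows if r.get(attr) == value]
    # Large result set: build an index, then look up in O(1).
    index = defaultdict(list)
    for r in rows:
        index[r.get(attr)].append(r)
    return index[value]

rows = [{"id": i, "status": "open" if i % 2 else "closed"} for i in range(10)]
assert len(find_by_attr(rows, "status", "open")) == 5
```

In practice you would build the index once and reuse it across lookups rather than rebuilding per call, which is the bookkeeping a library like ducks handles for you.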
Also, kudos for snagging the ducks pypi slug. I can’t believe that was available.
Oh yeah. It was a good day when the ducks name turned up! The "plural animal" thing references pandas / polars, which are like the OLAP version of ducks. Good near-homonym too. Index? In ducks!
The API layer sounds like a great application. Anywhere there's at least 1000 or so Python objects in memory, ducks will greatly speed up finding the subset you need.
This is an interesting concept. It reminds me of changingminds.org, which I am guilty of having read hundreds of pages of.
An idea I had was a way to log in and keep track of which pages you’ve already read, and how many times you’ve read them, so you could rank your favorite articles and revisit them as a refresher.
Thank you for 1) making the font (I’m actually going to try using it in VSCode, simply because I think it will be fun and, oddly enough, easy to read), and 2) teaching me what a ligature is.
I think you are doing it correctly; you just need to break that iceberg into smaller chunks. You’re getting caught up in the ideological nonsense that surrounds TDD and Scrum. “It can categorize a single transaction correctly” should be translated into a problem statement: “I want to be able to categorize a transaction.” You then brainstorm feasible solutions for the problem (or different aspects of it), develop the spec for implementing one in the form of tests (“able to do this, able to do that”), and build the solution. Afterwards you test it against some examples and ask yourself: does this solve the problem? How could it improve? Come up with a change, or a completely different solution; rinse, repeat.

I would say a general statement like “categorize a transaction” is difficult because you don’t have any specific examples. Come up with a list of a dozen transactions that need categorizing and your TDD experience will become much less frustrating.
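The "concrete examples first" approach might look like this in Python (the rule table, categories, and sample descriptions are invented for illustration):

```python
# Keyword -> category rules; each rule exists because a concrete example
# transaction on the list demanded it.
RULES = {
    "STARBUCKS": "coffee",
    "SHELL": "fuel",
    "NETFLIX": "subscriptions",
}

def categorize(description: str) -> str:
    """Return a category for a raw transaction description string."""
    text = description.upper()
    for keyword, category in RULES.items():
        if keyword in text:
            return category
    return "uncategorized"

# The example transactions double as the spec ("able to do this, able to
# do that"), written before or alongside the implementation:
assert categorize("STARBUCKS #1234 SEATTLE") == "coffee"
assert categorize("Shell Oil 57444") == "fuel"
assert categorize("ACH TRANSFER") == "uncategorized"
```

Each new real-world transaction that categorizes wrongly becomes the next failing assertion, which keeps the loop concrete instead of ideological.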
I think the issue with this is that as individuals we attach our identity to our dialogue and put it out into the world to share it, to express ourselves, to provide value, and also to collaborate, iterate, and learn… with other individuals. The problem with deconstructing human ideas and information like this is that it ignores temporality: what is true today may not be true tomorrow, and what is relevant today may not be relevant tomorrow. For the preservation of scientific or historic information this works great, and it even allows works to be copyrighted. However, the real order of the day is leading and making decisions, which ends up taking place in dialogue.
Dialogue is the single-threaded brute-forcing and tree-traversing activity for making decisions. Within a single thread, multiple subthreads do branch off, but then they become decentralized, which is not what we want. We want each subthread to be conscious of its uniqueness relative to all other threads globally, and for any duplication to be catalogued or, ideally, merged and then hyperlinked and indexed within the grand scheme. This is difficult when the most granular we can get in terms of categorization is the factual information, or simply the topic itself.

Places like HN and Reddit do well in letting you provide an entire article or website as the context to be explored, but how would you categorize that information? The real question is how you make it visible and accessible from a single standpoint: an individual trying to learn something or make a decision. I think search engines like Google have solved this problem well simply by using heuristics of merit (backlinks, PageRank, keywords), so implementing this on a miniature scale for an organization would be useful. The problem is that it would require users within the organization to forgo Google and instead use the provided intranet, which would initially be less useful and may never be as useful as Google. So the solution might be to use Google as an indexing service, which currently is not possible.
I think what we are ignoring at the end of the day is incentives. If people were incentivized to construct information this way, and got some sort of reward for it, it would happen naturally. But I don’t even think it is possible for a human to do that: to store, say, all of HN’s dialogues globally in their head and consider all of that before writing anything, hyperlinking instead to what is already there.
The main issue is that knowledge is a graph, not a tree, and dialogue is linear. Consuming knowledge through text as quickly and efficiently as possible is therefore more of a graph-traversal or traveling-salesman problem, which is very hard to solve; maybe something quantum AI can help with. One approach would be to incentivize hyperlinking. Another might be to store information in images, using something like a far more advanced DALL-E that lets the human mind grasp concepts simply by looking at trippy images or videos: using spatial and visual understanding to raise the number of dimensions of knowledge we can store and download into our brains.
FYI, when I try to clear the input boxes I can't get rid of the leading zero.