TL;DR: use probabilistic logging (x% chance of logging, with x chosen by dev for each log statement) instead of importance levels (ERROR, WARN, INFO, etc) to keep common events from flooding log files. Is that it?
WHY NOT BOTH? (OP here. I figured the prominent company branding was sufficient disclaimer that yes, we do have a product to sell; but we built the product because we see a better way of doing things, rather than vice versa.)
> TL;DR: use probabilistic logging (x% chance of logging, with x chosen by dev for each log statement) instead of importance levels (ERROR, WARN, INFO, etc) to keep common events from flooding log files. Is that it?
Basically. But if your volume is high enough to have this problem, you're probably past the point where humans reading logs is a reasonable use of time, so you need machines to help humans consume the logs (e.g. to produce summary statistics and graphs).
So as well as probabilistic logging, annotate the logs you do emit with the probability of emission. That way the machines consuming the logs can continue to account for the logs that were dropped.
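As a rough illustration (hypothetical `maybe_log` helper, C++ just for concreteness), a sampling logger might tag each emitted line with its sampling probability so downstream tooling can scale counts by 1/p to estimate the true event rate:

    #include <cstdio>
    #include <random>

    // Sketch: emit a log line with probability p, and annotate the line with p
    // so machines consuming the logs can account for the lines that were dropped.
    std::mt19937 rng{std::random_device{}()};

    void maybe_log(double p, const char* msg) {
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        if (coin(rng) < p)
            std::printf("sample_p=%.3f %s\n", p, msg);  // downstream: weight count by 1/p
    }

    // Usage: rare events log every time, common ones are sampled.
    //   maybe_log(1.0,  "payment failed");
    //   maybe_log(0.01, "cache hit");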
This is correct; OCaml integers are tagged with their LSB being 1 to indicate they're not a pointer. This means that the integer n is stored as 2n+1, and adding 1 requires modifying the stored value by 2.
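A minimal C++ sketch of that tagging scheme (an illustration of the encoding only, not OCaml's actual runtime code):

    #include <cassert>

    // Integer n is stored as 2n+1, so the low bit marks "not a pointer".
    using tagged = long long;

    tagged box(long long n)        { return 2 * n + 1; }
    long long unbox(tagged t)      { return t >> 1; }      // arithmetic shift drops the tag bit
    tagged add(tagged a, tagged b) { return a + b - 1; }   // (2x+1)+(2y+1)-1 = 2(x+y)+1
    tagged incr(tagged a)          { return a + 2; }       // adding 1 changes the stored word by 2

    int main() {
        assert(unbox(add(box(3), box(4))) == 7);
        assert(unbox(incr(box(41))) == 42);
    }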
> When using a signed integer variable to represent an index, there is an entire range of negative numbers that are not useful at all. Let's make use of them, then.
If you have "local variable" with more bits than log(# elements to be sorted), then yes, this trick works, but with sufficient extra bits, you can pack multiple indexes into a single variable by shifting and 'or'ing.
An even sneakier way of hiding this extra bit is to use the program counter to store it:
    #include <utility>   // std::swap

    // Bubble sort with the "did we swap?" flag stored in the program counter:
    // being inside loop b means at least one swap has happened this pass.
    void bubble_sort(int* v, int n) {
        int i;
    a:
        for (i = 0; i < n - 1; ++i)
            if (v[i] > v[i + 1]) {           // pair(i, i+1) is out of order
                std::swap(v[i], v[i + 1]);
                goto b;
            }
        return;                              // loop a completed with no swaps: sorted
    b:
        for (; i < n - 1; ++i)
            if (v[i] > v[i + 1])             // pair(i, i+1) is out of order
                std::swap(v[i], v[i + 1]);
        goto a;                              // at least one swap happened: run loop a again
    }
If the program is running the "b" loop, some pair has been swapped, so when it finishes the loop, it knows to re-run the "a" loop. If the program completes the "a" loop, it knows no pair was swapped, and can finish.
That's the essence of a goto-based state machine (one of the few cases where gotos are the most natural solution), compared to the alternative of storing the state in a variable and doing an extra indirection on every transition to "go to" the right one.
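For contrast, a sketch of the conventional version, where the "did we swap?" state lives in an ordinary variable that is tested on every pass (same array-and-length interface assumed as above):

    #include <utility>   // std::swap

    void bubble_sort_flag(int* v, int n) {
        bool swapped = true;          // the state is explicit data, not a code location
        while (swapped) {
            swapped = false;
            for (int i = 0; i < n - 1; ++i)
                if (v[i] > v[i + 1]) {
                    std::swap(v[i], v[i + 1]);
                    swapped = true;
                }
        }
    }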
I guess you can move an arbitrary number of booleans (or enums) into the running program state this way.
I suppose the very idea of variables is to separate the state from the code, so that complex states can be handled by generic code of reasonable size...
On the other hand, a lot of bugs stem from the resulting implicit state machine being incomplete and/or incorrect...
The same question, but including the program state, is the interesting one: how much information/state is needed and has to be shared, since that bears on how an algorithm scales.
I kind of wish this article had gotten into the details of how various compilers implement std::function. It was nice to see that clang++ gives every std::function a 32-byte footprint, which removes the need for dynamic memory for many small function objects, but I'm still wishing for more detail on the efficiency/cost of std::function vs. plain lambdas, function pointers, or other solutions.
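For anyone curious, here is a rough sketch (not from the article) of the three alternatives side by side; whether the std::function case allocates or adds indirection depends on the standard library and the optimizer, so it's worth measuring rather than assuming:

    #include <cstdio>
    #include <functional>

    int call_erased(const std::function<int(int)>& f, int x) { return f(x); }  // type-erased
    int call_ptr(int (*f)(int), int x)                       { return f(x); }  // plain pointer
    template <class F> int call_generic(F&& f, int x)        { return f(x); }  // no erasure

    int twice(int x) { return 2 * x; }

    int main() {
        int k = 10;
        std::printf("%d %d %d\n",
                    call_erased([k](int x) { return x + k; }, 1),  // may fit in the small buffer
                    call_ptr(twice, 21),
                    call_generic([](int x) { return x * x; }, 7));
    }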
> Given any two cities, a and b out of N, we do not know which one is visited first in the shortest path, and we certainly don't know what a's successor will be in the solution. Even verifying a solution to the problem (Is this sequence of cities the shortest path?) cannot be done in polynomial time.
Usually, the problem is simplified to a threshold test, like: does this tour have path length < X? These kinds of problems are equivalently difficult to produce solutions for in the general case, but are straightforward to verify.
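A sketch of that verification step, assuming a hypothetical n-by-n distance matrix `dist` and a proposed `tour` listing each city index once; the whole check is linear in the number of cities:

    #include <vector>

    // Decision version: "does this tour have total length < X?"
    bool tour_shorter_than(const std::vector<std::vector<double>>& dist,
                           const std::vector<int>& tour, double X) {
        int n = (int)tour.size();
        std::vector<bool> seen(n, false);
        double total = 0.0;
        for (int i = 0; i < n; ++i) {
            int c = tour[i];
            if (c < 0 || c >= n || seen[c]) return false;  // not a valid tour
            seen[c] = true;
            total += dist[c][tour[(i + 1) % n]];           // final edge closes the loop
        }
        return total < X;                                  // O(n): easy to verify
    }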
That's true; "hard to verify" isn't the good criterion either for a succinct distinction; all kinds of hard problems are easy to verify in poly time, including a modified version of this one.
The most succinct informal description I have for NP-complete problems is lack of optimal substructure; that is, given a solution to part of the problem, it's possible for that to not help at all for solving the whole problem.
For Traveling Salesman, this means that given the best tour for an N-city graph, the best tour for the same graph augmented by a single vertex could be entirely different.
And this is easy to then contrast with, say, sorting, to counter the earlier objection.
We can split the set into two, sort the two parts individually and then trivially merge the results. We cannot split the cities into two, solve the traveling salesman problem, and then easily merge the results.
And adding a new element to a sorted list is just a linear insertion.
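A quick illustration of that contrast using standard library calls (hypothetical values, just to show the shape of the operations):

    #include <algorithm>
    #include <iterator>
    #include <vector>

    int main() {
        // Sort two halves independently, then merge them cheaply.
        std::vector<int> a{1, 4, 9}, b{2, 3, 8}, merged;
        std::merge(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(merged));
        // merged == {1, 2, 3, 4, 8, 9}

        // Adding one element to a sorted list: find its spot, insert it.
        merged.insert(std::lower_bound(merged.begin(), merged.end(), 5), 5);
        // merged == {1, 2, 3, 4, 5, 8, 9}
    }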
I also found the redundancy of having the same number stored on both unnecessary; a scheme that just reorganizes where the bracelets are seems better. The "friendship" component of having identical bracelets is lost, but information density is certainly improved. Instead of considering this as one's complement, just give the right bracelet sum to one person and all the rest of the bracelets to the other.
We build software that designs and simulates Network on Chip interconnects along with the hardware to implement that interconnect in our customers' chips. We need more software people to write C++ code that generates the Verilog implementation of the interconnect and to improve the built-in performance models of our components.
See http://netspeedsystems.com/careers for some more specifics, although we're interested in more variety than just what that page indicates.
This is quite exaggerated; looking at the actual document [1], it's clear that they're claiming that in the context of the specification, "integer multiple of transmission time interval" doesn't include negative multiples, or 0 or even 1 * transmission time interval. As much as I don't like patent trolls, I see pretty clearly that the intent is 2, 3, 4, etc. times the transmission time interval.
Thanks for the comment. While I agree that the negatives and zero would not make sense, the interesting case is "1" in particular.
Although it's not in the post, the patent owner actually originally accused n=1 of being part of the infringing apparatus/method. I don't know when/why that changed, but presumably it was because they realized they wouldn't win if that was true. It is this sort of game playing that is problematic. https://twitter.com/vranieri/status/647179711563431940
"Integer" has a defined meaning. They chose that word, but it seems that they don't like the implications of that word.
There are lots and lots of ridiculous patents and patent claims in the US. I'm glad that the EFF is fighting against them.
But this particular paragraph in this particular claim is by no means an abuse of patent law. The patent is about an "integer multiple" of a time period, which obviously excludes 0 and negatives, and arguably excludes 1.
If there's something else going on here, then please add it to the article. But otherwise, consider that when you stretch the truth to support an honorable cause, you are making that cause less honorable.
I don't think it's obvious that an "integer multiple" of a quantity necessarily excludes 0 or 1. I could accept that it excludes zero if, from the context, zero would make no sense. But to presume that it excludes 1 is a bit too much.
I'm not sure if I agree with the EFF's position completely, but I see where they're coming from.
Let's say that I was implementing linear backoff. I do this by multiplying my base delay time by an iterator i, where i starts at zero and counts upwards. Sleep time = delay + delay * i. So after the first failure, we sleep "delay", and after the second failure we sleep "2 * delay", and so on. If I were describing this in words, I might say that I was multiplying the delay constant by the integer multiple i. In this context, it clarifies that the number is a whole number as opposed to a fraction.
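A minimal sketch of that backoff (hypothetical `backoff_sleep` helper), just to make the arithmetic concrete:

    #include <chrono>
    #include <thread>

    // After the i-th failure (i starting at 0), sleep delay + delay * i,
    // i.e. an integer multiple of delay where the multiple starts at 1.
    void backoff_sleep(int failure_count, std::chrono::milliseconds delay) {
        std::this_thread::sleep_for(delay + delay * failure_count);
    }

    // failure 0 -> 1 * delay, failure 1 -> 2 * delay, failure 2 -> 3 * delay, ...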
When a patent covers X, Y, and Z (or 1, 2, and 3) and it turns out that X (= 1) is unpatentable, would that make Y and Z unpatentable?
Anyway, the EFF does a disservice by throwing flames on the issue instead of explaining it clearly. I still have no idea who is suing over what, and why it matters what "integer" means.
Suppose the word integer weren't used at all -- what would it mean if the patent had originally stated explicitly "integer greater than 1" vs "integer greater than 0" ?
Please advance the conversation forward, don't muddy it up with grandstanding rhetoric.
In a patent, what matters are the claims at the end. These define what the inventor claims a right to exclude others from doing.
If a patent claim covers X, Y, or Z and it turns out that yes, X was known in the art, then yes, the patent will be invalid.
In plain English: Suppose someone claims to have invented doing a thing once, twice, or three times. That patent would give them the right to exclude anyone from doing that thing once, twice, or three times. But if people have been doing that thing once for years, the patent owner is attempting to exclude people from doing things that people already do. That's invalid. The court will not rewrite the claim to cover only two or three times in order to save it; there may be other ways to save a claim, but that is not one of them.
It seems clear that they used the word "integer" to clarify that only whole number multiples were allowed, and not 1.5, 2.7, etc. As to whether 1 is allowed, that's not determined one way or the other by the use of the word "integer".
As to the patent owner originally accusing n=1 of being part of the infringing apparatus/method, I agree that they shouldn't have done this if n=1 doesn't make sense in the context of their patent. That's more likely the mistake than the current CC.
Yes, the problem is whether 1 is allowed or not -- do you infringe if "integer" includes 1?
The point of the article is to show how words--even words with very well understood meanings--are often not clear. This is a problem for someone who reads this patent: how can they be sure whether what they do falls inside or outside the reach of the patent claims?
The patent owner, if they intended to claim only {n ≥ 2 | n ∈ ℕ}, could have easily and precisely done so. It is problematic that not until expensive litigation and thousands of lawyer-hours will we know whether n ≥ 1, or n ≥ 2, or something else entirely, or nothing at all.
The problem is that people aren't willing to pay very much for a patent application, and so the drafter doesn't really have enough time to scrutinize every single word.
It's not at all uncommon to spend 100 times as many lawyer-hours litigating a patent as you spend drafting it.
If the usual meaning of "integer" clearly doesn't make sense in this context, then we're already into the territory of coming up with a non-standard definition (unless it's just indefinite at that point). So in that case, why is {1, 2, 3, ...} OBVIOUSLY more sensible than {2, 3, ...}?
> why is {1, 2, 3, ...} OBVIOUSLY more sensible than {2, 3, ...}?
Because that's the definition the patent holder originally asserted when suing, only to suddenly change their tune when there turned out to be prior art for n=1.
People spend a lot of time making sure that all the words in a patent are exactly so. You'd think they'd have made it a tiny bit clearer exactly which set of numbers they were including (or not including), as the patent actually says they've patented even negative multiples of a time interval, when that's self-evident nonsense.
We can sorta figure out what they probably meant, but that's a really bad idea for essentially the same reason that having a compiler that decides "I really think you meant to put a semicolon there" is a bad idea.
We all know that a sane compiler should reject garbage input. If you don't say what you actually mean, nobody actually knows what has been patented, any more than we can claim to know the results of undefined behavior in a C program on all possible systems.
And these are comparable situations because they both involve the errors inherent in trying to interpret incorrectly written statements in a formal language.
> Because that's the definition the patent holder originally asserted when suing, only to suddenly change their tune when there turned out to be prior art for n=1.
That seems not only legitimate, but outright necessary. If the n=1 case corresponds to someone else's patent, but the n>1 cases don't, then the n=1 case must be excluded, right?
> Essentially the same reason that having a compiler that decides "I really think you meant to put a semicolon there" is a bad idea.
Compilers do exactly that sort of thing, so they can continue parsing and hopefully give you more diagnostics. (Correct, useful diagnostics, needless to say.) It's called error recovery and sometimes involves inserting tokens into the parse stream.
Error recovery was hugely important in the era when programmers submitted decks of punched cards to some clerk behind a window. But even today, we are still greedy for shorter edit-compile-run cycles, no matter how short they are. If I made four syntax errors, I'd rather fix them in one go than invoke the rebuild four times and fix them one at a time.
> That seems not only legitimate, but outright necessary. If the n=1 case corresponds to someone else's patent, but the n>1 cases don't, then the n=1 case must be excluded, right?
The patentee can amend his claims while a patent application is still pending, but a court isn't going to do it for him post-issuance just because of prior art.
Yes, but they issue a list of errors that need fixing, rather than a broken binary (assuming your compiler isn't broken, anyhow).
As mentioned below, it's their responsibility to write the patent correctly. Once it has been issued, it's too late to modify it, though they often try this with creative interpretation.
Similar to what was said to Humpty Dumpty, there's a question of whether they can have it mean N=1 and N>1 at the same time.
But excluding n=1 from the patent just reduces the scope of the claims. The n=1 case turned out to be infringing on something, right?
If a patent has some parameter space, and some values of that space infringe on something, where others do not appear to, then it makes sense to exclude those values.
> They chose that word, but it seems that they don't like the implications of that word.
But, out of context, the word carries the implication of denoting 0, -1, -2, ...; those implications were present and clear when they initially chose the word.
Anyway, how about that C language, redefining integers to be between INT_MIN and INT_MAX! And what to make of these "unsigned integers"? That's a total oxymoron; I mean the C constant 5U is positive. Positive is a sign. So, not unsigned! :)
This was extremely convenient for me in the past.