Hmm, not sure if troll, but energy conservation is definitely not just some convention. It’s a fundamental thermodynamic law. Spooky action at a distance doesn’t violate the speed-of-light limit on information propagation either. The particles have to be entangled before they’re sent off in opposite directions. Only once one is observed does the other also collapse, but that doesn’t mean you can communicate faster than light, because the information had to be prepared while the particles were still together. IIRC, anyway; it’s been a decade since I studied quantum information theory.
Here: https://phys.org/news/2015-02-space-faster.html - we are trying hard to reconcile this and categorize the expansion of the universe as something else (e.g. not a movement with a speed, because time itself is a dimension, or something like that). But this is still up for debate, tbh.
Another example: we said CP symmetry was true (it was a law, like a lot of things in physics) until it was violated in an experiment involving the weak nuclear force.
And now we are holding the fort at CPT symmetry as the law.
In physics, the evidence for anything is kinda light. A lot of reasonable extrapolations have been made, but they are still extrapolations (i.e. educated guesses).
Even the Big Bang itself is just an extrapolation from the "theory" that the universe is expanding.
To be fair, it is difficult to find good evidence because we can't dial back time, can't go observe things on Neptune, can't measure gravity at the subatomic scale, etc. So we have to work with what we can experimentally observe.
Our tools are getting better, and this is where the physics paradigm shift will come from.
You talk like these are 100% certain. They are just theories that we currently hold based on the little evidence we have.
Failing to recognize that is straight up unscientific.
Apologies, I wasn’t trying to call you a troll; it was more that your first comment seemed to me like it might have been made in jest. Clearly it wasn’t.
The articles you’ve linked to are interesting and there are clearly many scientific discoveries to be made by studying the early formation of the universe which will test our current models. However I don’t feel like convention is the right word for laws like the conservation of energy, even if there are some difficulties with tying up these theories and new experimental evidence from events at the scale of the Planck length.
Convention to me would mean something that has been accepted just because it’s always been done that way and people didn’t really bother to question why, but I don’t think that’s the case here. But we’re verging on pedantry now so no point going down that route any further.
>Convention to me would mean something that has been accepted just because it’s always been done that way and people didn’t really bother to question why.
You're wrong. The laws of thermodynamics are sort of axiomatic, meaning you really can't explain why energy is conserved. It has just been shown experimentally to hold so far; no one knows why energy is conserved or has actually proved it to be true. It is totally "convention" as you defined it.
The caveat here is that entropy is not axiomatic. Entropy occurs as a consequence of probability. Probability is the real axiomatic assumption of the universe and entropy is a byproduct.
The laws of thermodynamics were established before people fully understood what was really going on with entropy, so they are sort of a hodgepodge of axioms and derived theorems. From a temperature perspective these assumptions work, so the laws still have their use. But the laws of thermodynamics aren't some elegant grouping of fundamental laws of the universe; they're a set of rules grouped somewhat arbitrarily.
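To make the "entropy as a byproduct of probability" point concrete, here's a minimal toy sketch of my own (not anything from the articles above): count the microstates of N particles split between two halves of a box. The mixed macrostates combinatorially swamp the ordered ones, which is all the second law statistically amounts to.

```python
from math import comb

N = 50  # particles, each sitting in either the left or right half of a box

# The number of microstates with k particles on the left is C(N, k).
all_left = comb(N, N)        # exactly 1 microstate: everything on one side
half_half = comb(N, N // 2)  # the evenly mixed 25/25 macrostate

print(all_left)   # 1
print(half_half)  # over 10^14 microstates for the 25/25 split
```

A system wandering randomly among microstates is overwhelmingly likely to be found in (or drift towards) the high-count macrostates, which is why entropy appears to only ever increase.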
>But we’re verging on pedantry now so no point going down that route any further.
I find this attitude rude. You called him a troll, then apologized, then gave your final answer and dismissed any further discussion as "pedantry." Like wow, you get the last word and shut down anything else he has to say? You were rude to assume he was joking, and you're being rude again by declaring any discussion after your final statement to be pedantry.
Either way I disagree with you. It's not pedantry. This discussion is about convention and the conservation of energy. Your statement is wrong.
Thank you for writing this and introducing the word axiomatic. I was using the word convention to mean something that is defined and unquestionable. It's an assumption we base all of our physics equations on. Axiomatic might capture the meaning better.
This means, I assume, that we could also base all of our equations on, say, "energy is not conserved and always increases by 1 Joule", though the physics equations might be much more complex.
Pro-science folks are too enthusiastic about current science, to the point that they become unscientific. Pointing out that these theories/laws/conventions might become invalid in the future gets downvoted.
Newsflash: physics theories/laws/conventions are getting invalidated all the time.
>This means, I assume, that we could also base all of our equations on, say, "energy is not conserved and always increases by 1 Joule", though the physics equations might be much more complex.
We can but this won't match with observations. We assume energy is conserved only because our current observations show that it has been conserved thus far.
Referring to the OP article, he's basically writing that c being the absolute speed limit is not axiomatic. It's not something we just assume to be true; it's a theorem derived from the assumption (aka axiom) that the universe cannot produce inconsistent events.
Yeah, sure. We initially wanted to build something to compare the equivalence of two programs written in different languages, and for this we started out using KLEE. Unfortunately, we found it didn't quite satisfy our needs, and the changes we wanted to make didn't really fit with its architectural model. We also felt it wasn't as amenable to parallelisation as we would have liked. The core technology uses an SMT solver just like KLEE, but our hope is that Symbolica will be more scalable and easier to use for larger problems.
DeepState looks interesting, we had seen this before and other stuff from trail of bits, but admittedly we haven't used it ourselves. Our impression is that it seems like more of a frontend to various symbolic execution / fuzzing backends. So maybe it's something we should consider integrating with too.
We hadn't come across ForAllSecure before, so thanks for pointing them out. They do appear to be similar, but their focus seems to be on large commercial/enterprise projects in sectors like defence and aerospace. So maybe I'm wrong, but I don't think they have an offering for individuals / smaller teams.
On the logo point, as another commenter pointed out it's the mathematical forall symbol, so I guess we just both had the same thought when it came to coming up with a logo.
Great question. This obviously isn't exposed through the playground right now as that's just a single file demo.
Our backend software does integrate with larger build systems. So the way you would handle third party deps is to link them during the build and then analyse the whole built module. We plan to make it easier to integrate with toolchains for specific languages in the future.
Hey, the reason for the long times on the playground is actually that we’re having to wait for build servers to boot up and then build the code before we can finally ship it off to our backend for analysis. We’ve had quite a bit of feedback on this slowness and we hope to eliminate some of this constant factor boot up and compile time soon, and for what remains make it clear that this is what’s happening in the status box.
So although it might seem like it, we’re not actually doing any looping, we’re doing constraint solving as you’ve mentioned.
I’ve not used Hypothesis, but it says it’s inspired by QuickCheck, and I’m a regular user of FsCheck, which is the F# equivalent. Those tools have a similar goal in mind to ours, but whereas they generate randomised test data (FsCheck generates 100 test cases by default, IIRC), we effectively solve the equation, so we can actually say that it holds for all inputs. The key difference is that we don’t actually invoke the code with concrete values, but rather treat it as a set of constraints that should always be satisfied. We then check these constraints, and if they fail we can reverse-engineer a concrete value that would cause the violation. So we effectively run it for all inputs, without actually running it for any input.
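A toy sketch of the difference (entirely hypothetical code, with a brute-force search standing in for a real SMT solver): randomised testing samples the property at a few points and can miss a rare failing region, whereas the constraint-solving view asks directly for any violating input.

```python
import random

def buggy_clamp(x):
    # Intended: clamp x into [0, 255]; off-by-one bug on a rarely-hit path.
    if x > 255:
        return 256  # should be 255
    if x < 0:
        return 0
    return x

def prop(x):
    # Property that should hold for every int.
    return 0 <= buggy_clamp(x) <= 255

# QuickCheck/FsCheck style: 100 random samples from a typical generator
# range; the failing region (x > 255) is never exercised here.
random_ok = all(prop(random.randint(-100, 100)) for _ in range(100))

# Constraint style: ask for *any* input that violates the property.
# Brute force stands in for the solver; a symbolic executor would derive
# the counterexample without concretely running the code at all.
counterexample = next((x for x in range(-1000, 1001) if not prop(x)), None)

print(random_ok)       # True: randomised testing missed the bug
print(counterexample)  # 256: the solver-style query finds the bad path
```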
On the monetisation point we do plan to have a paid version of the product too. We think that one of the limiting factors of using symbolic execution to date has been in the path explosion problem when analysing large programs. A large part of our reasoning behind building a new symbolic executor from scratch was that we wanted to be able to parallelise it to help overcome this issue. We intend to provide a cloud hosted version of Symbolica that makes use of this so that users can get access to this compute power on a pay per use basis.
So our rough plans at the moment are to charge on a consumption basis to use this service.
> The key difference is that we don’t actually invoke the code with concrete values, but rather treat it like a set of constraints that should always be satisfied
How does that work with a dynamic language such as Python?
The static type information isn't actually too important to us; what our tool cares about is being able to interpret branching conditions in the code in order to turn them into constraints. We currently do this at the LLVM IR level, so as long as we can compile the code down into this form, we can analyse it. With Python, a lot of the work we had to do to get a prototype working was in handling all of the system calls the Python interpreter makes when booting up. These aren't always straightforward to treat symbolically because they interact with the file system etc., but it's possible to provide mock implementations that don't affect the analysis of the user's code.
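As a rough illustration of "branching conditions become constraints" (a made-up toy, not Symbolica's actual internals): a single `if` splits execution into two paths, each described by a predicate over the symbolic input, and a solver is then asked for a model of each path.

```python
# Source branch: `if x * 3 + 1 > 10: ... else: ...`
# Symbolic execution records one path condition per side of the branch.
constraints = {
    "then-path": lambda x: x * 3 + 1 > 10,
    "else-path": lambda x: not (x * 3 + 1 > 10),
}

def solve(pred, domain=range(-20, 21)):
    # Return some value satisfying the path condition, or None if the
    # path is infeasible. Brute force stands in for an SMT solver here.
    return next((x for x in domain if pred(x)), None)

models = {path: solve(pred) for path, pred in constraints.items()}
print(models)  # {'then-path': 4, 'else-path': -20}
```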
Hey, thanks for the feedback. A more comprehensive list of features is on our todo list, we’ll update the site shortly.
The memory violation one doesn’t include symbolica.h because we don’t need to symbolize any variables for that one. When we run the compiled code through our solver it adds additional constraints such as, “don’t access memory that’s out of bounds”, so we can detect that automatically.
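A toy sketch of that implicit-constraint idea (hypothetical, not the real implementation): every array access `a[i]` silently contributes the safety condition `0 <= i < len(a)`, and the solver hunts for an input that breaks it.

```python
ARRAY_LEN = 8

def index_expr(x):
    # Index computed from the (symbolic) program input x.
    return x + 3

def violates_bounds(x):
    # The implicit safety constraint the executor adds for a[index_expr(x)].
    i = index_expr(x)
    return not (0 <= i < ARRAY_LEN)

# Ask the "solver" (brute force here) for an input breaking the constraint.
bad_input = next((x for x in range(-10, 10) if violates_bounds(x)), None)
print(bad_input)  # -10 drives the index to -7, below the array bounds
```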
On your final point, we’re working on the LLVM bitcode, so we do have to compile the code, hence the print statement. You’re right that this is similar to other symbolic executors. We built this one because we found others were quite difficult to get started with, hard to extend with the features we personally wanted, and lacking support for the languages we wanted. Obviously we’ve not achieved all of these goals yet, but we think we’ve built the foundations to be able to do so.
Thank you for the answer and a big kudos for the project.
Regarding memory violations, can Symbolica deal with symbolic memory? Can it deal with symbolic files as input? How about syscalls with symbolic inputs? These were the main problems I had when I worked on my toy symbolic execution engine. If so, you should definitely market these features.
Yeah it can deal with symbolic memory. As long as the memory allocation is fixed size the contents can be fully or partially symbolic. We also support symbolic addresses for allocations to ensure that pointer arithmetic is fully tested.
We're currently simulating a lot of the underlying system at the C standard library level. For a number of reasons we'd like to lower this to the raw syscall and assembly level. That would allow any libc implementation to be tested along with the application code, and we may even be able to simulate threading and the file system. Syscalls could be made symbolic, along with files, by treating the entire system symbolically, but obviously this is a lot of work, so it's something we're gradually building towards.
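A sketch of what "symbolic files" could look like (purely illustrative; `SymbolicFile` is a made-up class, not part of Symbolica): `read()` hands back fresh unconstrained symbols instead of concrete bytes, so any checks the program later performs on them become path constraints.

```python
class SymbolicFile:
    """Mock file whose contents are unconstrained symbolic bytes."""

    def __init__(self, name):
        self.name = name
        self.pos = 0

    def read(self, n):
        # Each byte is a fresh symbol, represented here as a
        # (file name, offset) tag rather than a concrete value.
        out = [(self.name, self.pos + i) for i in range(n)]
        self.pos += n
        return out

f = SymbolicFile("input.txt")
header = f.read(4)  # four symbolic bytes the analysis can constrain later
print(header)
```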
Hmm, yeah, sorry about the Firefox thing, and thanks for reporting it. I just checked and it seems the code editor div isn't respecting the height: 100% style, so it's shrunk to a few pixels high. I'll fix it today.
We do have some upcoming changes that will make the status clearer on the website and will be removing some constant time overhead that we're paying on each playground run. I agree that the playground is on the slow side at the moment whilst we're still in the early stages of developing this product.
On the Lisp point, it's not really been developed for commercial reasons, since, as you say, it doesn't have the largest market share, but we wanted to see if we could get it to work with languages other than C, and Lisp seemed like a nice one to try.
I should add that we believe it’s possible to support other langs too, especially if they compile down to LLVM IR, or have an interpreter/runtime that does. So we plan to eventually work on support for others too like C++, .NET, Java, JavaScript and Rust.