Nim is as safe as any other language. Perhaps it's not as safe as Rust, but that brings specific trade-offs most people don't want to deal with. I don't understand why people think that Nim is "terribly unsafe" when in reality it's like any other language.
With regard to memory safety, it is not. https://news.ycombinator.com/item?id=9050999 is an old comment from Patrick, but in today's Nim, it segfaults in both release and development modes for me. Rust's guaranteed memory safety means that Rust code (without explicit `unsafe`, i.e. the vast majority of code) cannot segfault.
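For a concrete illustration of what that guarantee means (a minimal sketch of my own, not from the thread): in safe Rust, an out-of-bounds access becomes a deterministic panic rather than a possible segfault or silent memory corruption:

```rust
fn main() {
    // Silence the default panic message so only the asserts matter.
    std::panic::set_hook(Box::new(|_| {}));

    let v = vec![1, 2, 3];

    // In-bounds indexing works as expected.
    assert_eq!(v[2], 3);

    // Out-of-bounds indexing panics deterministically instead of
    // reading arbitrary memory the way C might.
    let result = std::panic::catch_unwind(|| v[10]);
    assert!(result.is_err());
}
```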
> I don't understand why people think that Nim is "terribly unsafe" when in reality it's like any other language
For example, unless I write a bad C extension, I cannot get Ruby to segfault.
None of this makes Nim a bad language. All languages have tradeoffs.
Yes, Rust is safer than Nim; I'm not arguing that. I'm also not arguing that Nim is as safe as languages with automatic memory management.
EDIT: Also, Nim is planning to turn those segfaults into runtime NilErrors, plus a nilChecks flag that will check for them at compile time. You can also avoid this by annotating pointers with `not nil`.
I probably should have been clearer, but I think it's safe to say that unlike C/C++, Nim can handle these types of issues like other languages that deal with pointers (Java, Go, etc.), with control from the programmer. The only memory-safe language I know of is Rust, but I'm probably wrong on that part, which is why I singled out Rust as being safer than Nim.
That's not at all the impression I get from reading the Nim manual. It is rather clear when it says "unsafe" next to many different features. Can you declare a function pointer and point it at anything? It allows unchecked array access, so what's stopping traditional buffer overflows? I've not used Nim, but compiling to C and exposing a lot of C-like functionality seems to indicate that code will still be subject to the same types of errors. Why do you say this isn't the case? Why does the manual not mention such things? (Another example: doesn't Nim need most objects to be GC-allocated to be safe? So if you're not using the GC, which I imagine a lot of perf-sensitive code will want to avoid, what's preventing errors there?)
Maybe I've got the wrong impression, and their docs are terribly misleading and there are safety checks all over. But I found the docs easy to understand last time I read them, and the safety issues seemed clearly marked and more or less where you'd expect.
I never said that Nim is fully safe or that it doesn't have unsafe areas in the language. But at this stage of development, Nim really focuses on its language goals rather than anything else. I have stated multiple times throughout this thread that there are ways to avoid this unsafety now, and ways that will help avoid these situations in the future (nilChecks).
It has a feel of a scripting language, but as far as I can tell, it rather has the safety of C/C++, which I personally wouldn't call "safe like any other language".
Why not? I'm interested to know, because I don't see it as any less safe than languages that don't have automatic memory management and/or languages like Rust.
Because it is flatly untrue? Memory safety is rather a binary thing. C# without /unsafe is safe. Same for Java and Rust. Not true for Nim or C/C++. Rust is unique in doing this without any GC or other runtime overhead, AFAIK, which makes it a bit special.
Nim does not have a separate unsafe keyword, because all unsafe features are already characterized by keywords; that's a result of its Pascal heritage. To check whether a piece of Nim code is safe, you check for the presence or absence of these keywords; e.g., you can grep for "ptr" in Nim, while grepping for "*" in C# isn't particularly helpful. Every unsafe feature in Nim has an associated keyword/pragma. Having a special "unsafe" keyword that says, essentially, "this procedure can contain other unsafe keywords" is sort of superfluous.
Note: these unsafe features have two purposes. One is to interface with C/C++ code. The other is to be able to write close-to-the-metal code in Nim rather than in C (where you wouldn't gain any safety by using C, but lose the expressiveness of Nim). This is, for example, how Nim's GC is itself written in Nim.
None of the unsafe features are necessary for high-level programming, i.e. unless you actually want to operate that close to the metal.
That's a bit misleading and mostly due to the fact that (1) Nim hasn't reached 1.0 yet and (2) in practice these issues are relatively uncommon for the C code that Nim generates, so this hasn't been a particularly high priority.
First of all, Nim's backend does not target the C standard; it targets a number of "approved" C compilers. This (1) makes it a bit easier to avoid undefined behavior, because these C compilers know that they may be used as backends by high-level languages and provide efficient means to disable some of the undefined behavior, and (2) allows Nim to emit optimizations specifically for them. For example, Nim knows that gcc understands case ranges in its switch statement and can optimize for that. See compiler/extccomp.nim for more examples. Nim also makes some additional assumptions about how data is represented that are not defined in the C standard, but are true for all target architectures (or can be made true with the appropriate compiler configuration).
Second, regarding the specific cases of undefined behavior:
1. That shift widths aren't subject to overflow checks is an oversight; most shifts are by a constant amount anyway, so they can be checked at compile time with no additional overhead. Nim does not do signed shifts (unless you escape to C), so those are not an issue.
2. Integer overflow is actually checked, but the checks are expensive; there's an existing pull request for the compiler to generate code that leverages the existing clang/gcc builtins to avoid the overhead, but it hasn't been merged yet. -fno-strict-overflow/-ftrapv/-fwrapv can also be used with clang/gcc to suppress the undefined behavior (depending on what you want), and one of them may be enabled by default in the absence of checks.
3. Nil dereferences are not currently checked, but they will be. There's already a nilcheck pragma, but it isn't fully implemented and also isn't available through a command-line option. This will be fixed. Until then, you can use gcc (where -O2 implies -fisolate-erroneous-paths-dereference) or pass --passC:-fsanitize=null with clang to trap the issue.
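The checked-shift and checked-arithmetic behavior described in points 1 and 2 can be sketched in Rust, whose standard library exposes the same idea as the gcc/clang overflow builtins (checked_shl / checked_add are Rust's names here, not Nim's):

```rust
fn main() {
    // A shift width >= the bit width is rejected (None) instead of
    // being undefined behavior as in C.
    assert_eq!(1u32.checked_shl(40), None);
    assert_eq!(1u32.checked_shl(4), Some(16));

    // Checked addition, analogous to the gcc/clang
    // __builtin_add_overflow family of builtins.
    assert_eq!(i32::MAX.checked_add(1), None);
    assert_eq!(40i32.checked_add(2), Some(42));
}
```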
I haven't done anything in Nim, but in C it's really easy to do bad things with pointers. You can deref a NULL, you can be sloppy about arithmetic, you can overflow a buffer, etc. Nim seems to emphasize using other features instead of pointers, but it still has them.
As stated before, there are ways to avoid them, and Nim will soon handle them directly. But do you know how many other languages let you dereference a NULL pointer? Unlike in C, this does not result in undefined behavior, in the sense that it won't silently execute something unsafe in Nim.
Sorry for my ignorance, but the languages other than C that I've used are javascript, python, various lisps, bash, sql, prolog, etc.: no pointers! I'm interested to learn how pointers might be made safe, in Nim or anywhere else?
It really depends on how you define "safe". Nim will allow you to deref null pointers (unless you annotate the type with `not nil`, in which case it can never be nil, and you get a compile error if it could be), but if one is nil, it will act like Java and throw an exception with --nilChecks:On. The only language I know that makes pointers safe is Rust, with its borrow checker and such, but that's a trade-off I don't really want, and the options Nim provides are better for my case.
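To make the comparison concrete, here is a rough sketch (my own, not from the thread) of how Rust removes null dereferences at compile time: a maybe-absent value is an `Option`, and the compiler forces you to write out the nil path before you can touch the contents:

```rust
// A "nullable" reference is spelled Option<&str>; there is no way to
// reach the string without first handling the None (nil) case.
fn first_char(s: Option<&str>) -> char {
    match s {
        Some(text) if !text.is_empty() => text.chars().next().unwrap(),
        _ => '?', // the nil/empty path must be written explicitly
    }
}

fn main() {
    assert_eq!(first_char(Some("nim")), 'n');
    assert_eq!(first_char(None), '?');
}
```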
As stated before in this thread, there _will_ be nil checks in the future, which will result in NilErrors; or you can just annotate with `not nil` right now and it will never be nil. You can also use -fsanitize flags with the clang backend to trap the null dereferences.
"Unlike C, this does not result in undefined behavior "
Of course it does. If you turn off expensive runtime checks, you'll get SIGSEGVs. That doesn't happen in Rust, because it is semantically impossible to dereference NULL.
Since when were we comparing Nim and Rust? Yes, Rust is safer than Nim, but that comes with trade-offs. You are obviously not reading the whole thread: I've brought up multiple times that you can avoid these issues, and that it will become even easier to avoid them in the future.
When --nilChecks:On becomes a thing, dereferencing null pointers will work like in Java: a NilError (NullPointerException in Java). This is why I said it's as safe as mainstream languages that don't have AMM, while languages like Rust are safer than those mainstream languages. Any others to point out?
Well, this is currently highly speculative. What I proposed to Andreas was essentially a model based on Eiffel's SCOOP (with some additional influence from Erlang). Whether it's a practical design remains to be seen.
Note that shared, lockable heaps need not be heavyweight structures. It is entirely possible to imagine a shared hash table with one heap per bucket and fine-grained locking, for example. Collections for such small heaps can be fast because the number of roots is limited, and (depending on what invariants you guarantee), you can even forgo stack scanning for most collections or limit the number of stack frames that need to be traversed.
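The "one heap per bucket with fine-grained locking" idea can be sketched roughly like this (in Rust rather than Nim, and with a toy hash function; `ShardedMap` and everything in it are made-up names for illustration, not any real design):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// A table split into independently lockable shards, so threads that
// touch different shards never contend on the same lock.
struct ShardedMap {
    shards: Vec<Mutex<HashMap<String, i32>>>,
}

impl ShardedMap {
    fn new(n: usize) -> Self {
        ShardedMap {
            shards: (0..n).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    // Toy stand-in for a real hash: sum of bytes mod shard count.
    fn shard_for(&self, key: &str) -> &Mutex<HashMap<String, i32>> {
        let h: usize = key.bytes().map(|b| b as usize).sum();
        &self.shards[h % self.shards.len()]
    }

    fn insert(&self, key: &str, val: i32) {
        // Only this one shard is locked; the others stay available.
        self.shard_for(key).lock().unwrap().insert(key.to_string(), val);
    }

    fn get(&self, key: &str) -> Option<i32> {
        self.shard_for(key).lock().unwrap().get(key).copied()
    }
}

fn main() {
    let m = ShardedMap::new(8);
    m.insert("a", 1);
    m.insert("b", 2);
    assert_eq!(m.get("a"), Some(1));
    assert_eq!(m.get("b"), Some(2));
    assert_eq!(m.get("missing"), None);
}
```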
SCOOP is not a horribly complicated idea (well, other than using preconditions as wait conditions, which has been critiqued in the past and is not a critical ingredient). It's basically an extension of the basic idea of monitors. It rests on having a unified approach for shared-memory and distributed systems, and accomplishes that by assuming that objects can be partitioned into disjoint ("separate") data spaces, access to which is regulated to ensure mutual exclusion; this is why it translates nicely to a model involving thread-local heaps.
At the programming language level, this then mostly involves maintaining mutual exclusion (in Eiffel, the necessary semantics are attached to how "separate" types are handled) and having the optimizer get rid of unnecessary copies.
The ecosystem is there; I think you're confused... the libraries work, and this is the best runtime environment for a Ruby on Rails app, which is Ruby's killer app.
There are gems that work only with JRuby. There are gems that work with any source-compatible implementation (MRI, JRuby, Rubinius, etc.), there are gems that work with any Ruby which supports the FFI (at least, again, MRI, JRuby, and Rubinius).
There is no implementation (or implementation/platform combination -- MRI on windows supports a different set of gems than MRI on linux) that supports all gems.
That was my point - there are two ecosystems with something shared, but you always need to be aware of the differences.
I think in the future the JVM can offer better integration with native libraries. I'm not familiar with the details, I know JNI was a pain, I know there's JNA and JNR now and I just found this: http://openjdk.java.net/jeps/191
At this point it's conjectural whether this Ruby implementation will get there. The headline implies that it's already the case; hence it's a misleading headline.
It should therefore not be surprising that for carefully crafted benchmarks, and with enough time and effort, a Ruby implementation can be as fast as the equivalent Java code. But history is littered with the corpses of implementations that tried and failed to meet their performance goals (Python's Unladen Swallow being but one example).
> The performance will be on par _before_ the final release of 9000, the final release should include the big performance boost
I think you misinterpret. It will be on par or better no later than the final (in the not-prerelease sense) release of 9000.
In the 9000 lifecycle (which starts with that final release) there are opportunities to go much further on performance. Essentially, 9000 will be ready for a "full" release when, in addition to feature/stability targets, the performance is at least no worse than the last major stable version. During its stable release lifetime, the performance may get much better (the team sees the opportunity for that, at least).
But I stated that I do not wish to learn about how the internals of the computer work; rather, I'm more interested in the language itself, which is Rust. Plus, I saw Elixir and I didn't like it at all :(
So, no language worth its salt completely hides how the computer works from you (again, except for certain members of the FP family, or declarative languages like Prolog or SQL).
For example, JavaScript has a problem when adding ".1 + .2". Know why? IEEE 754 limitations on double-precision floating-point arithmetic, as implemented in hardware.
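This isn't specific to JavaScript; any language that uses hardware IEEE 754 doubles shows the same thing. In Rust, for instance:

```rust
fn main() {
    let sum = 0.1_f64 + 0.2_f64;

    // The doubles nearest to 0.1 and 0.2 don't add up to the double
    // nearest to 0.3; the result is off in the last places.
    assert!(sum != 0.3);
    assert!((sum - 0.3).abs() < 1e-15);

    // Prints 0.30000000000000004
    println!("{:.17}", sum);
}
```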
Programming languages exist to program the machine, full stop. Everything else is just people faffing about on their thesis defense.
Anyways, if you didn't like Elixir--probably because you don't have the experience yet to see why it's awesome--that's okay: learn C. :)
I don't personally like Go, but from what you're saying, since I'm taking Java, I don't need to learn C on top of that, so I can just get into Rust (and I want to be a web developer as a career). Plus, Rust has frameworks like Nickel and Iron that are being actively developed.