No. Copyright is implicit. Something can only be in the Public Domain if it's explicitly placed there (and supposedly, not all countries even recognize Public Domain, hence the creation of the WTFPL[1]).
> and supposedly, not all countries even recognize Public Domain
I'm not sure the phrasing is correct, at least for the most common example of this issue (the European copyright tradition of moral rights): many European countries split the Anglo-Saxon notion of copyright into economic rights (sometimes called patrimonial or estate rights) and moral rights, which are generally perpetual, inalienable, and imprescriptible.
As a result, while an author can assign (or waive) their economic rights, they cannot assign or waive their moral rights.
Unless there are additional special cases in the country's lawbooks, an author in a moral-rights country cannot put his works in the public domain (since it would require waiving rights he can't waive). Either the works cannot be copyrighted (and are intrinsically in the public domain) or they will fall into the public domain when the author's rights expire.
It's common sense - in most European countries you can allow others to earn money from your work, modify it, distribute it, and so on, but you can't allow others to claim that they, not you, created something you actually made.
So it's not public domain, because some rights remain yours forever, but you can very well allow anybody to use, modify, and distribute your creation.
Actually, something can only be in the Public Domain if it's not covered by copyright, or if its copyright has expired; saying "I put this in the public domain" might have no effect whatsoever.
I don't know of any countries that don't recognise public domain (that is, implement eternal copyright).
The big example usually given is France. Under French law there are several "moral rights", such as the right to anthologize. Here's the best way I can express this: when _why left the Ruby community and deleted his online presence, several people created collections of all of the open-source software _why had written, without _why's permission. That sort of collection is illegal under French law; an English translation of the relevant statute says, "The author alone shall have the right to make a collection of his articles and speeches and to publish them or to authorize their publication in such form." That right is not transferable, you can't get rid of it, etc.
There is a related nasty moral right in France about withdrawing your copyrighted works -- "Notwithstanding assignment of his right of exploitation, the author shall enjoy a right to reconsider or of withdrawal, even after publication of his work, with respect to the assignee." If I am reading the other terms on this page right, that might not apply to software in particular, but it could probably apply to Creative Commons licensed writing, e.g. Wikipedia.
Even worse, French law does not permit a "mostly complete" copyright contract, as I understand the legal history -- so the French courts have actually said things like, "this contract tries to give away your moral rights, that's legally impossible, so the whole license is legal nonsense, so there never was a license, so it is totally proprietary."
You could release it under a copyright licence that is de facto the same as the public domain. It's "copyrighted to you", but anyone can do anything with it.
No. Copyright applies automatically when the work is created. You have to explicitly grant more relaxed rights, i.e. licenses and copylefts, for people to know whether they can use it.
What is a good resource for understanding how all of this works? Books, site, tutorial, minix usenet threads... I don't care, just something to get me started.
edit:
Assuming I'm a standard, competent web dev-ish Ruby, Python programmer with little experience below those languages.
I'm actually planning on writing some tutorials based around DCPU-16 as it's a nice small instruction set with very orthogonal addressing modes (but don't worry if you don't know what that means :-)
Sort of an "Assembly Language for Python Programmers" guide.
I'm not really looking to learn as a means to expand my field of working expertise, but more for the sake of it. My programming experience has been firmly rooted in higher-level languages, and I think this is a perfect opportunity to learn something new and potentially useful. I don't like the idea of having the lower levels of programming remain a black box.
You might be interested in py4fun - it has a fairly lightweight introduction to a "mythical machine", including an assembler and compiler: http://openbookproject.net/py4fun/
First read K&R C and get comfortable writing C programs. Then pick up the latest edition of Patterson & Hennessy and learn the MIPS ISA. It's a very simple instruction set that avoids most of the pedagogical distractions imposed by x86 or some other more complicated architecture.
I actually have the pleasure of taking a course with Patterson this semester. He's a great lecturer and he manages to keep the (sometimes very dry) material very interesting.
Why the latest edition? We're using the 2nd edition (from 1998) in my computer architecture class this semester and it seems just fine. Even better was getting it off Half.com for <$10 after shipping.
Personally, I think that the latest edition has a lot of valuable material on parallelism and GPU programming. It's not strictly necessary for what he was asking, but it's good stuff to know nonetheless.
Wow, that's a good deal. In my comp arch class, we've been forced to get the newest edition -- that is, the 4th edition, revised. Different from the 4th edition, which we used in computer organization last semester. I'm not sure how much new material has been added, but there's certainly a hefty price increase associated with buying the newest edition.
There are two books co-authored by Patterson and Hennessy. Which one do you mean? "Computer Architecture: A Quantitative Approach" or "Computer Organization and Design: the Hardware/Software Interface"?
Your best bet is a university textbook on computer architecture. The course I took used a decimal-based CPU (not binary) and had many other "simplifications", but it's probably the right starting point, at least unless you want to learn assembly-level programming full stop... in which case there are some really good 32-bit Linux asm tutorials out there (64-bit stuff isn't as common).
This is basically part of most first-year IT/CS courses, and it's kind of unused in day-to-day IT, so many self-taught programmers never learn it.
Looks like implementing this thing is becoming the new national pastime, at least judging by the number of different implementations that have been discussed on HN in the last day.
I wonder how long till someone implements it in Minecraft?
I am fairly sure Notch would have one... if he is planning on running this, then I imagine he is basically going to have massive farms of GPGPUs planned for doing the emulation.
You could try to run a simulation of branch-heavy code on hardware optimized for branchless number crunching, but you're probably better off compiling the opcodes into something you can then run on hardware that is actually optimized for branch-heavy code.
But I was kind of imagining that you could essentially emulate a single virtual system per "stream processor", or whatever they are labeling the basic units. I figured they could run a couple of hundred "virtual cores" per card despite not being optimized for that. But I will be the first to admit to not really knowing the details.
The other option of course is something like Intel's Knights Corner architecture, which wouldn't pay such a penalty on performance for branching.
Branching used to be handled by marking the memory of the cores that failed an if test read-only and just letting all of them continue the computation.
It's gotten better now, but branching is still extremely unwieldy. No branch prediction either.
I suppose the question is whether you would need multiple virtual cores running simultaneously on the same processing element, or whether having so many processing elements means you can just be inefficient and give each one the job of emulating a single processor. I haven't seen anything about the clock speed of the virtual CPU; however, if it's 5 or 10 MHz, you don't really need high performance or efficiency, you just need a way of cramming more jobs into your servers and leaving the CPUs free to run other game code.
I was trying to be somewhat polite, but... GPUs aren't magic speed juice. You know those big speed gains that get GPU advocates so pumped up? CPUs have the exact same massive speed advantages over GPUs too! That is, when you have a task that the CPU is designed for and the GPU isn't, CPUs kick the GPU's ass.
There's no point in trying to jump through hoops to convince the GPU to be something it isn't. It isn't going to be faster than a CPU, or rather, a lot of CPUs.
Being 5 or 10 MHz is irrelevant. Being able to simulate them faster means you need fewer servers to do it. (You can tell who actually works on clouds and who doesn't by the attitude towards performance; people who don't actually work on clouds think performance matters less in the cloud....)
I am fully aware that your average GPU isn't optimal for this task; however, I was imagining that there would still be value in shifting the workload off the primary CPUs.
My line of thinking is around using a single GPU stream processor to emulate this CPU at the required performance (i.e. 10 MHz). If you could do that, you could have hundreds of these processors emulated for the cost of managing the I/O to them.
I am not expecting "magic speed juice"; I am actually expecting to get 1-5% of what the GPUs are capable of. However, I would see this as a net advantage if it took the workload off the CPU. Something like Knights Corner could easily do this (it's basically a Pentium 1 core).
The point I am making is that Notch's CPU is basically a home computer CPU from the 80s. It doesn't require that much functionality to emulate (as if a dozen emulators in a few hours weren't indication enough), and since OpenCL is Turing complete you can emulate anything (see running ARM Linux on an 8-bit processor); the question is whether it's efficient enough to be viable.
Can a 1 GHz stream processor emulate a 10 MHz single-issue simple RISC core? I have no idea, but I suspect it's not the part we have seen so far that will be the determining factor; I believe it will instead be the I/O devices that determine the requirements.
This is so cool. Between jtauber's work and the other versions floating around I can't imagine that we won't be seeing basic compilers for higher level languages soon. So far there are emulators for the CPU, multiple assemblers, and a disassembler. Have you checked out the C version on the front page? He's been updating it like mad.
That Cell class looks a bit odd, and I can't imagine it's doing good things for your memory use. Perhaps you could simplify it by making your registers a list or a dictionary, rather than a tuple?
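Something like this, say (untested, and the slot layout is just a guess):

    # A, B, C, X, Y, Z, I, J, plus SP, PC and O: eleven plain slots
    registers = [0] * 11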
All the Cell class seems to be doing instead is adding ".value" all over the place where it isn't necessary. I haven't actually tried this, but it looks like dropping the Cell class entirely, putting that line in as the definition of registers, then s/.value//g ought to work, or very nearly work.
PC, SP, and O are already defined as variables containing the index for that register, a fine way to do it.
The reason for the Cell() is explained in a comment; I have to be able to pass around references to registers and memory locations, distinct from their values. I'm open to alternatives, but the above won't work: how do you pass in "register A" or "memory location 0x1000" as the arg to an instruction method in that case?
As an offset, probably, with appropriate changes. You're probably better off channeling C design here than Python. I'm running on the assumption that while speed may not be your overriding priority, you will want this to run with some speed. I haven't examined the opcodes, but in this case, even if I had to distinguish between a number and a register reference, I'd probably do something like let numbers be numbers and let register references be one-element tuples containing the register offset, then switch on the type when it came time to use them. That is most likely going to be significantly faster than going through the very powerful class/instance machinery of Python, and should you be inclined to play with PyPy, it'll probably JIT a heck of a lot better too. (Although I'd also play with having a number with a very high bit set indicate a register reference, which would probably JIT even better, since there'd be no type check.)
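To make that concrete, a rough, untested sketch of the tuple variant (read/write are made-up helper names):

    registers = [0] * 11  # A..J plus SP, PC, O

    def read(v):
        # plain int = literal value, one-element tuple = register reference
        return registers[v[0]] if type(v) is tuple else v

    def write(v, val):
        if type(v) is tuple:
            registers[v[0]] = val
        # otherwise it's a literal; assignments to literals fail
        # silently, which I believe matches the spec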
By splitting the difference and playing with PyPy, you should be able to use Python to dodge a lot of the C bookkeeping BS while potentially paying very little in speed. Using a lot of fancy Python constructs could mean a multiple-orders-of-magnitude slowdown for only marginal gain in this case.
I did think about passing around a (type, identifier) tuple where type = REGISTER|MEMORY|LITERAL, but I was put off by writing code conditional on type. The OO programmer in me dies a little when that is done rather than using polymorphism.
Match the tools to the task. Organizational schemes suitable for multi-hundred-thousand line codebases aren't always needed for something like this, which just isn't going to get that large. Old-school bitbashing can be both fast and easy enough to read. OO can cost you a lot here for not very much gain.
Or whatever. Your program, of course. (No sarcasm.)
As soon as the spec popped up I started implementing DCPU-16 in Python on my own too, and I quickly ran into the same issue. Currently, my Cell-equivalent class (which I called Pointer) defines __call__, so I can do a() to read and a(data) to set.
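In skeletal form it's something like this (simplified sketch, not the actual code):

    class Pointer:
        _unset = object()  # sentinel, so any real value can be written

        def __init__(self, container, index):
            self.container = container  # the memory list or the register list
            self.index = index

        def __call__(self, value=_unset):
            if value is Pointer._unset:
                return self.container[self.index]  # a() reads
            self.container[self.index] = value     # a(data) writes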
It arguably sucks less, but I'm looking for something better and am currently trying various solutions involving __getitem__, closures, function attributes, decorators, and a sprinkle of metaprogramming to keep things nicely separated and much less C-ish.
I've already defined the opcodes this way (decorator + function), which makes them very descriptive and almost reduces the opcode dispatcher to a one-liner.
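The pattern, roughly (simplified; the names are just what I'd pick):

    OPCODES = {}

    def opcode(code):
        def register(fn):
            OPCODES[code] = fn
            return fn
        return register

    @opcode(0x2)
    def ADD(cpu, a, b):
        ...  # set a to a+b, set O on overflow

    # and the dispatcher is then essentially just:
    #     OPCODES[op](cpu, a, b)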
I haven't read the spec or followed the topic in general, but from a quick glance at it: the size of the registers isn't given, and the registers aren't memory-mapped.
Why not use an integer instead of Cell? 42 is memory address 42, -3 is register number 3 (including for SP, PC, etc.). Or addresses >= 0x10000 are registers; then you can just have
    def ife(self, a, b):
        m = self.mem
        self.skip = m[a] != m[b]
with Python's native indexing of the mem[] list, which could be an array().
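Concretely, something like this (the register layout here is just an assumption):

    from array import array

    REG_BASE = 0x10000                        # registers sit just above the 64K words
    m = array('H', [0] * (REG_BASE + 11))     # 8 GP registers + SP, PC, O

    A = REG_BASE + 0  # operand value meaning "register A"
    # with that, an IFE between register A and memory word 0x1000 is just
    #     self.skip = m[A] != m[0x1000]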
Only issue is there's a third type to consider, which is a literal number.
So I need to distinguish Register, Memory Location, and Literal. I considered passing around a (type, value) tuple, but conditional code based on type really tells the OO programmer in me that polymorphism should be used instead.
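i.e. give each operand kind the same tiny interface and dispatch on it, along these lines (sketch):

    class Literal:
        def __init__(self, n):
            self.n = n
        def get(self):
            return self.n
        def set(self, v):
            pass  # writes to literals fail silently in the spec

    class Register:
        def __init__(self, cpu, i):
            self.cpu = cpu
            self.i = i
        def get(self):
            return self.cpu.registers[self.i]
        def set(self, v):
            self.cpu.registers[self.i] = v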
Couldn't cells be split off from the "normal" path and used as proxies only when needed (i.e. when the cell actually needs to be passed around)? And lazily instantiated (but memoized so they can be reused)?
Does not fix the issue jtauber pointed out: you can't pass around a section of an array "by reference" so that people can set stuff in e.g. register A.
How did you decide which value of PC to use (current instruction? next instruction? second word of the current instruction?) when PC is one of the operands? The spec is not very clear on exactly when PC is incremented.
Even if you want to use the same license as Notch, you must specify that explicitly; otherwise, no one can really use your code.