But if the currency is deflationary with a coin limit, over the long term the only way more people can enter the market / make use of the currency is for, as an example, $1 USD to be represented by smaller and smaller subdivisions of BTC. That necessarily increases the value of 1 BTC.
Why is there ever any incentive to do anything with BTC besides hoarding?
Don't be so simple-minded. It can succeed as a remittance mechanism without succeeding as a currency. If it does nothing more than eliminate Western Union for international wire transfers, it'll be massively successful.
I think by definition it is a currency if it succeeds as a remittance mechanism, because you can really only send BTC using the bitcoin network; it can't track USD values directly (as a network). Using the Western Union network you can literally "send dollars" (or euros or pounds), but the same isn't true of bitcoin. It's a remittance network tied to a currency, and that currency is bitcoin.
In this sense its use as a remittance network implies its success as a currency.
For this reason, I had no idea this is what you meant - I thought you meant it can succeed as an investment, without succeeding as a currency.
No, IMHO it can fail as a currency and succeed as a remittance mechanism. It can fail as a stable currency and thus not be a valid unit of account while still succeeding for remittance.
The point of these benchmarks is that, given the same database and the same queries, languages have different performance profiles.
If I build an application in Java and Python, with the same database backend, running the exact same set of SQL queries, the Java application should perform better.
If you look at those benchmarks, for the same DB queries on the same DB backend, throughput and latency can differ by an order of magnitude. We're talking the difference between 200 and 2,000 requests per second, or 50ms vs. 500ms latency. That's not a rounding error.
And it's important to note that with Scala, a big part of its complexity comes from the practical philosophy of its creators: purity is sacrificed in order to actually make it work on the JVM the way we want it to.
I used to work with Java on the server-side, but I'm programming almost entirely in Scala now (I'm at a small shop where I was lucky enough to convince the boss to let me give it a go on a project last year) and I've got to say that it's completely changed the way I think about and solve problems. I learned Haskell in university and I'm trying to learn more, but I don't see it ever being accepted in our office.
The Scala syntax does suffer from problems, especially when working with async programming / Futures: the nested-callback problem that JavaScript also has, for example. But there is an elegant solution in the language... you just have to know how to use it. On the other hand, it doesn't look like a completely foreign language to Java developers, and it's not too hard to get our new hires productive with it.
How do you manage nesting from callbacks and matches and such? Inlining short functions like _ + _ is fine, but how do you organize more advanced operations?
(I'm just constantly looking for ways to make my scala code more accessible.)
I am indeed referring to for-comprehensions. Any time you have nested flatMap/map calls you can replace them with a for, since that's all a for-comprehension really is...
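For example, something like this (a minimal sketch with made-up lookups, just to show the desugaring):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical async lookups returning Futures.
def findUser(id: Long): Future[String] = Future.successful(s"user-$id")
def findOrders(user: String): Future[List[String]] = Future.successful(List(s"orders for $user"))

// Nested flatMap/map...
val nested: Future[(String, List[String])] =
  findUser(1L).flatMap { user =>
    findOrders(user).map { orders =>
      (user, orders)
    }
  }

// ...is exactly what the equivalent for-comprehension desugars to.
val comprehension: Future[(String, List[String])] =
  for {
    user   <- findUser(1L)
    orders <- findOrders(user)
  } yield (user, orders)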
Or, since I'm using the async Postgres module which returns Futures, to make them run in parallel you need to create the futures beforehand. I often have something like this...
// Both queries are kicked off immediately, so they run in parallel.
val fProject = Project.findById(projectId)
val fTask = Task.findById(taskId)

for {
  projectOption <- fProject
  taskOption    <- fTask
  students <- projectOption match {
    case Some(project) => Student.findByProject(projectId)
    case None          => Future.successful(IndexedSeq[Student]())
  }
  result <- (projectOption, taskOption) match {
    case (Some(project), Some(task)) => {
      /* do something with project, task and students */
    }
    case (None, _) => Future.successful(NotFound(s"Project $projectId not found"))
    case (_, None) => Future.successful(NotFound(s"Task $taskId not found"))
  }
} yield result
Because the futures are created before the for-comprehension, the two queries run in parallel and their results are 'collected' by the comprehension. If the calls were made inside the for-comprehension instead, as in the first example, the computations would necessarily run in sequence.
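To make the contrast concrete, here's a sketch of the sequential shape (same lookups as above), where the second query doesn't even start until the first completes:

// Sequential: Task.findById isn't called until the project lookup has completed,
// because after desugaring the second generator sits inside the first one's flatMap.
for {
  projectOption <- Project.findById(projectId)
  taskOption    <- Task.findById(taskId)
} yield (projectOption, taskOption)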
I'm using the Play framework; for me, these for-comprehensions usually live in my controllers, and the final result is an HTTP Result.
Passing along failure can still be tricky, but I find this much more organized than nesting callbacks.
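For what it's worth, the pattern I usually end up with (a minimal sketch, assuming an older-style Play controller and the same hypothetical Project/Task lookups as above) is to let a failed Future fall out of the comprehension and translate it at the end with recover:

import play.api.mvc._
import play.api.libs.concurrent.Execution.Implicits.defaultContext

object Projects extends Controller {
  def show(projectId: Long, taskId: Long) = Action.async {
    val fProject = Project.findById(projectId)
    val fTask    = Task.findById(taskId)

    (for {
      projectOption <- fProject
      taskOption    <- fTask
    } yield Ok(s"found: $projectOption / $taskOption")).recover {
      // Any failure in the chain lands here instead of in nested error callbacks.
      case e: Exception => InternalServerError(e.getMessage)
    }
  }
}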
And it would also only be for a specific user, rather than the dragnet surveillance (biggest fishing trip in history) that the NSA, et al, are conducting.