Years ago there was a thread like this one, and people suggested "Drawing on the Right Side of the Brain", just like they do here. So I tried that. Here's my account; I hope it will be useful to whoever reads it now that the topic has fallen off the front page.
* * *
I worked through all the exercises in the book, and to be honest it was elevating: in a few (two?) weeks I was able to draw still life in a way I never thought I could, and the trick of looking at objects and imagining them flattened on paper burned into my mind so much that I was catching myself randomly doing it in the middle of my day (I still do it sometimes).
Then in a month or so I signed up for actual drawing classes. On the first meeting I told my teacher about the book and showed some of my drawings. Here's how I remember the conversation that followed.
- Does that book also teach composition?
- A bit I guess, but not particularly.
- For example if you draw a sphere, where should you place it in the picture, and how should you orient the frame?
- In the middle I guess? I don't know?
He sketched a few spheres, and it turned out that portrait frame placement combined with the sphere being slightly elevated from the center ("on a pedestal") looked the best. At this point I realized that the pictures I brought had their objects scattered randomly, as I paid no attention to their placement.
- Does it also teach the laws of perspective?
- It teaches you to use your eyes, so perspective is achieved automatically.
So he placed a small cube onto a table and had me draw it. It came out a bit uneven, but I thought it was OK. Then he pointed out that I did not abide by the laws of perspective, and had actually inverted the perspective on the top face. He corrected a few lines, and the cube magically became much more believable. I thought I had used my eyes well -- I did apply all my effort -- but apparently without knowing what to look for I missed an important relation, and my cube looked like it widened toward the far end.
- What about shading?
- The book says to use your eyes. Maybe squint a bit to help see the values.
So he had me fill in the cube. It was obvious to me that the darkest spot on the cube was its bottom part where the shadow fell, so I shaded that part extra dark. The teacher told me to walk away for a bit, then come back and look at my drawing. Sure enough, the extra dark part was the first thing that caught my eye when I looked at it -- but looking at the actual object, it was the front edge that attracted the most attention. There was a discrepancy. And yet, in my mind the values were correct: I had copied them the way I thought I saw them. The teacher explained that the biggest contrast on paper attracts the most attention -- unlike in 3d space.
Then he had me half-erase the bottom part and outer edges until they almost faded into the background, and add contrast to the front edge -- darken the shadow and brighten the light just around the edge area -- much more than there was on the object, maybe even contrary to the physics itself. I then walked away, came back, looked at the drawing again: now my attention was dead on the front edge -- just like with the 3d object. Logically I thought the tones were all wrong, but it sure looked much better.
The teacher didn't explain it in these words, but I gathered that he never concentrated on reproducing the exact tones he saw, but rather on recreating the feeling of looking at the object -- the tones are then all made up to implement this feeling.
- Do you know how to sharpen a pencil?
- What do you mean? Who doesn't?
- Show me.
Folks, it turned out I didn't even know that. The actual technique for drawing is to sharpen a pencil to a needle point and expose 5-10mm of its graphite: this makes it easier to shade, and (more importantly) forces you to grip it far from the tip -- as you would a fencing foil, rather than a pen. That makes it possible to stand further back when drawing and keep the whole picture in your field of view instead of just the detail you're working on, which in turn is needed to implement the intended whole-image feeling, and prevents you from getting lost in the details.
* * *
My conclusion, if any, is this: there is so much more to drawing than just using your eyes. There are very basic techniques, used by painters daily, that improve the quality of their work beyond that of blind copying -- at a fraction of the effort. I am very grateful to "Drawing on the Right Side of the Brain" for persuading me that I can learn to draw (anyone can), but make no mistake: it did not teach me the techniques a novice should learn.
You found yourself a great teacher! I have to say something similar: after using and enjoying several resources mentioned in this thread, what sticks out most in my memory are the lessons I learned from artists in two adult-education classes on drawing and design at local workshops. Very similar stuff to what you listed above.
For those of us who don't use Lisp (or Emacs+SLIME) and are stuck with Python/Julia/Lua/etc. and Vim, may I give a practical recommendation? The vim-slime plugin [1] is a half-decent way of making interactive development work. With a bit of tuning you can make it start a terminal emulator window inside Vim, and have it send the current paragraph of your source code into it at the press of a button.
While not as great as working with a proper REPL-oriented language (as the article explains), this is still so much more pleasant than having to re-run the whole program every time you change a function.
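For concreteness, here's roughly the kind of tuning I mean -- a minimal vim-slime sketch assuming Vim 8's built-in terminal (the option and the <Plug> mapping name come from the plugin's documentation; choosing the vimterminal target is just my preference):

```vim
" use Vim's built-in :terminal as the target for sent code
let g:slime_target = "vimterminal"

" <C-c><C-c> is vim-slime's default send binding; spelled out here,
" it sends the paragraph under the cursor to the terminal
nmap <C-c><C-c> <Plug>SlimeParagraphSend
```

Start your interpreter in a terminal split, put the cursor on a paragraph of code, and one keypress ships it over.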
I also recommend jupyterlab for a similar experience. I was a hardcore vim terminal guy for 17 years, but now I won’t be going back.
The ability to interactively develop functions in almost any language inside the same environment has been revolutionary for my productivity, given my short attention span.
If you're using Jupyter within the browser you can have both via
the FireNVim plugin in Firefox. It connects to NeoVim in the
background and lets you edit any multi-line text field with 100%
of your personal Vim configuration, the only limitation is that
the color scheme cannot be changed.
> With a bit of tuning you can make it start a terminal emulator window inside Vim, and have it send the current paragraph of your source code into it with a press of a button.
Defining keybindings for splitting the window, opening a
terminal buffer and copying a paragraph does not really require
a plugin. That's one of the features I like most in Vim: its
ability to map any input to another sequence of keypresses
can replace whole scripts and plugins, provided you're fine with
readability comparable to regular expressions.
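As a sketch of what that can look like with Vim 8's terminal API (the key choices and register use are my own untested assumptions):

```vim
" open a terminal in a vertical split
nnoremap <leader>t :vert term<CR>

" yank the inner paragraph (it lands in register 0) and type it into
" the first terminal buffer -- a plugin-free paragraph send
nnoremap <leader>s yip:call term_sendkeys(term_list()[0], @0)<CR>
```

Readable it is not, but it covers the vim-slime use case above in two lines.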
Those gaps immediately catch your eye, don't they? Seeing them and thinking "what if there's something we are missing" is exactly what many particle physicists also think. This is why particles and interactions addressing those gaps have been hypothesized to death and then some. Comparing some of these hypotheses with experiment is a major part of fundamental physics today, second only to measuring the things we know exist more precisely. In many cases the comparison has been performed, and when it has, the conclusion was "if there is something there, then the parameters must be such that our instruments have not measured the effects yet". This is why particle physicists are pushing for better instruments.
As a related tangent: we have verified the existence of the Higgs, but only partially.
What has been measured so far is the interaction strength between the Higgs and the W bosons, the Z bosons, and the top quark: these are the main channels of Higgs production at the LHC, and they fit the theory prediction to within 20%. Yes, 20%. Not precise at all. What about the interaction between the Higgs and the other 5 kinds of quarks? Electrons? Muons? Taus? Itself? The data ranges from +-100% uncertainty to none at all.
In particular, the measurement of the Higgs-Higgs self-interaction is an important one: the assumption the Standard Model makes about the strength of this interaction is not unique, and others are possible. We don't know yet: not enough data. Moreover, models with multiple Higgs particles are possible. Again, we don't know yet: not enough data.
This is why some cautious researchers say that we have discovered "a scalar particle", "believed to be the Higgs", and that further measurement is needed to determine all of its properties.
This is one of the main activities at the LHC and among theoreticians at the moment: measuring the Higgs interactions with other particles. There is still room to improve precision with more data from the LHC, but ultimately there is only so much it can do without increasing the energy.
I think the idea that scientific code should be judged by the
same standards as production code is a bit unfair. The point when
the code works the first time is when an industry programmer
starts to refactor it -- because he expects to use and work on
it in the future. The point when the code works the first time
is when a scientist abandons it -- because it has fulfilled its
purpose. This is why the quality is lower: lots of scientific
code is a first iteration that never got a second.
(Of course, not all scientific code is discardable; large quantities
of reusable code are reused every day; we have many frameworks,
and the code quality of those is completely different.)
An interesting concern is that there often is no single piece
of code that has produced the results of a given paper.
Often it is a mixture of different (and evolving) versions of
different scripts and programs, with manual steps in between.
Often one starts the calculation with one version of the code,
identifies edge cases where it is slow or inaccurate, develops
it further while the calculations are running, does the next
step (or re-does a previous one) with the new version, possibly
modifying intermediate results manually to fit the structure of
the new code, and so on -- the process is interactive, and not
trivially repeatable.
So the set of code one has at the end is not the code the results
were obtained with: it is just the code with the latest edge case
fixed. Is it able to reproduce the parts of the results that were
obtained before it was written? One hopes so, but given that
advanced research may take months of computer time and machines
with high memory/disk/CPU/GPU/network speed requirements only
available in a given lab -- it is not at all easy to verify.
> the process is interactive, and not trivially repeatable.
The kind of interaction you're describing should be frowned upon. It requires the audience to trust that the manual data edits are no different from rerunning the analysis. But the researcher should just rerun the analysis.
Also, mixing old and new results is a common problem in manually updated papers. It can be avoided by using reproducible research tools like R Markdown.
If it can't be trivially repeated, then you should publish what you have with an explanation of how you got it. Saying that "the researcher should just rerun the analysis" is not taking into account the fact that this could be very expensive and that you can learn a lot from observations that come from messy systems. Science is about more than just perfect experiments.
No, you should publish this research and be clear about how it all worked, and someone will reproduce it in their own way.
Reproducibility isn't usually about having a button to press that magically gives you the researchers' results. It's also not always a set of perfect instructions. More often it is documentation of what happened and what was observed, to whatever extent the researcher believes is important to understanding the research questions. Sometimes we don't know what's important to document, so we try to document as much as possible. This isn't always practical, and sometimes it is obviously unnecessary.
As one point of comparison, SymPy is comically slow compared to Sage. This is mostly because SymPy is pure Python; Sage, on the other hand, uses its own derivative of GiNaC [1], Pynac [2], for its internal symbolic expression representation, and then multiple external libraries for non-trivial operations. Symbolic transformations mostly go through Maxima [3], for example. Sage literally converts expressions to strings, pipes them through a Maxima process, and then parses the result back. This is still much faster than pure-Python SymPy.
There is an effort to speed up SymPy core, SymEngine [4], but it's been in development for years now, and still isn't integrated into SymPy. Not sure why.
Case in point: 'expand("(2 + 3 * x + 4 * x * y)^60")' takes 5 seconds with SymPy; Sage (Pynac) does the same in 0.02 seconds.
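To illustrate the string round-trip: this is not Sage's actual code, just a stdlib sketch of the pattern, with a Python child process standing in for the Maxima process (the function name is mine):

```python
import subprocess
import sys

def eval_via_subprocess(expr: str) -> str:
    """Serialize an expression to a string, pipe it through an
    external process, and parse the printed result back -- the same
    shape as Sage handing expressions to Maxima, except the child
    here is another Python interpreter."""
    code = f"print(eval({expr!r}))"
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(eval_via_subprocess("(2 + 3 * 4) ** 2"))  # prints 196
```

The surprising part is that even with the serialize/parse overhead on every call, this kind of out-of-process evaluation can beat an in-process pure-Python implementation.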
LEP annihilated electrons with positrons; the LHC does not, as it only accelerates protons. The FCC will have the option to do both: the plan includes two separate pipes in the same tunnel, one for protons and one for electrons.
This however does not increase the collision energy significantly: a single electron-positron pair has a (rest) energy of ~1MeV -- this is what is released during the annihilation -- but the energies we want to reach at the next collider go up to 10TeV, some 10^7 times larger.
I am continuously surprised by the amount of work zig committers manage to put out every release. It is actually inspiring. How do they get to appear so productive?
Anyway, here's a nitpick. In this release they've removed varargs, and this is what a printf call now looks like in zig:
std.debug.warn("{}+{} is {}\n", .{1, 1, 1 + 1});
The ergonomics of typing .{} for every print seem pretty awful if you're a heavy user of debug printing. I was hoping that maybe some sort of syntax sugar was planned so that tuples could be used without .{} -- emulating varargs -- but the bug tracker doesn't turn up anything in that direction.
In the previous release (0.5.0) one didn't need the .{}, but instead integer literals needed explicit casts, because the vararg implementation was limited:
std.debug.warn("{}+{} is {}\n", i32(1), i32(1), i32(1 + 1));
Both of these options seem like a regression compared to the usual
printf("%d+%d is %d\n", 1, 1, 1 + 1);
... and there doesn't seem to be a way to work around it by e.g. using a function overloaded in the number of arguments (no such thing in zig) or some preprocessor tricks (again, no such thing).
PS. I'm also surprised to find out that zig seems to take 2.6 seconds to compile the above example.
To be honest, typing and remembering %d %f %c %s etc. seems like more hassle than typing {}, even if you have to type it for every case.
I think the real gotchas are that { is escaped as {{ and } as }} instead of \{ and \} (though that does follow from the old %%), and that you must either include , .{} or rewrite the function call into something significantly different to do an unformatted print.
- Maia chess (https://maiachess.com), the human-like chess engine based on neural networks,
- several neural networks by Dietrich Kappe (https://github.com/dkappe/leela-chess-weights/releases),
- and handicapped Stockfish (https://stockfishchess.org).
The whole thing is at https://github.com/magv/bchess, and can be installed with just 'pip install bchess'.