No, if the memory-mapped page you're accessing is in RAM, then you're just reading the RAM; there is no page fault and no syscall and nothing blocks.
You could say that any non-register memory access "blocks" but I feel that's needlessly confusing. Normal async code doesn't "block" in any relevant sense when it accesses the heap.
When dealing with async I think it is very relevant to think of exactly the points where control can be switched.
As such, a regular memory read is blocking, in that control will not switch while you're doing the read (i.e. you're not doing anything else while it's copying). This is unlike issuing an async read, which is exactly a point where control can switch.
edit: As an example, consider synchronous memory copy vs asynchronous DMA-based memory copy. From the point of view of your thread, the synchronous copying blocks, while with the DMA-based copying the thread can do other stuff while the copying progresses.
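To make the analogy concrete, here's an illustrative asyncio sketch: copying inline blocks the task (and the whole event loop), while handing the copy to another thread, loosely like kicking off a DMA transfer, lets the task yield so the loop can run other work until the copy completes. The function names are made up for illustration.

```python
import asyncio

def sync_copy(src: bytearray) -> bytes:
    # Synchronous copy: this thread does nothing else meanwhile.
    return bytes(src)

async def dma_style_copy(src: bytearray) -> bytes:
    # The await is exactly a point where control can switch to other
    # tasks, while the copy progresses on another thread.
    return await asyncio.to_thread(sync_copy, src)

async def main() -> bytes:
    data = bytearray(b"x" * 1_000_000)
    return await dma_style_copy(data)

result = asyncio.run(main())
print(len(result))
```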
As the author, I don't think there's a clear definition of "blocking" in this space, other than some vibes about an async task not switching back to the executor for too long, for some context-dependent definition of "too long".
It's all fuzzy and my understanding is that what one use-case considers being blocked for too long might be fine for another. For instance, a web server trying to juggle many requests might use async/await for performance and find 0.1ms of blocking too much, vs. a local app that uses async/await for its programming model might be fine with 10ms of "blocking"!
That the process/thread enters kernel mode and is then suspended, waiting for IO or for some other event. As long as the thread is running your code (or is schedulable), it's not blocked. The async implementation can then ensure your code cooperatively gives up the CPU for other code.
If your memory is paged out and you then access it, using your definition, it would block.
So, in the context of async code, there's no difference from the application perspective between reading mmap'ed data and reading "regular" data (i.e. memory from the regular paged pool), as both could incur blocking IO.
If you're lucky and the mmap'ed data is in the system cache, then reading that data will not block and is fast. If you're unlucky and your process has been swapped out, then doing a regular memory read will block and is slow.
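An illustrative sketch of the point: reading mmap'ed memory looks like a plain slice, but if the page isn't resident in the page cache, the access triggers a page fault and the thread blocks until the kernel has read it in from disk.

```python
import mmap
import os
import tempfile

# Create a throwaway file to map.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello, mmap")
    path = f.name

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # "Just a memory read" if the page is cached; silently blocking
    # IO if it has to be faulted in from disk.
    head = mm[:5].decode()
    mm.close()

os.unlink(path)
print(head)
```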
Your definition of blocking is a bit different from my own. Synchronous is not always blocking. If the data is there, ready to go, there is no "blocking."
If you consider all memory reads to be "blocking", then everything must be "blocking". The executable code must, after all, be read by the processor. In an extreme case, the entire executable could be paged out to disk! This interpretation is not what most people mean by "blocking."
Fair point. I guess I conflate the two, because what's interesting to me, most of the time, is where the control flow can switch.
I never rely on synchronous IO being non-blocking when writing regular (i.e. non-embedded) code. So reading from cache (non-blocking) vs disk (blocking) doesn't matter that much to me: it's synchronous, and that's all I need to reason about how it behaves.
If I need it to be non-blocking, e.g. when playing audio from a file, then I need to ensure it via other means (pre-loading the buffer in a background thread, etc.).
edit: And if I really need it not to block, the buffer needs to reside in the non-paged pool. Otherwise it can get swapped to disk.
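A hypothetical sketch of the pre-loading approach mentioned above: a worker thread does the blocking file IO up front, so the time-sensitive path (e.g. an audio callback) only ever performs an in-memory read. The `Preloader` class and its method names are illustrative, not from any library.

```python
import os
import tempfile
import threading

class Preloader:
    def __init__(self, path: str) -> None:
        self._buffer: bytes = b""
        self._ready = threading.Event()
        threading.Thread(target=self._load, args=(path,), daemon=True).start()

    def _load(self, path: str) -> None:
        with open(path, "rb") as f:
            self._buffer = f.read()  # blocking IO, off the hot path
        self._ready.set()

    def data(self) -> bytes:
        self._ready.wait()  # in steady state this returns immediately
        return self._buffer

# Demo with a throwaway file standing in for the audio file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pcm-data")
    path = f.name

loader = Preloader(path)
print(loader.data().decode())
os.unlink(path)
```

Note this only moves the blocking elsewhere; as the edit above says, truly guaranteeing no blocking also requires the buffer itself to be unswappable.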
You don't yield to your cooperative-multitasking runtime while reading from it, which is obviously what everyone in this thread means by "blocking". It's not helpful to start telling them "you're using the word blocking wrong" apropos of nothing.
> Do you have some examples where a normal memory read is async?
This hints at a way to make it work, but it would need the compiler (or explicit syntax) to make clear that you want to switch to another task when the page fault triggers the disk read, and then resume with a plain memory access that resolves from RAM once the IO has completed.
It could look like a memory read but would include a preparation step.
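A rough sketch of such a preparation step, assuming a POSIX system where Python exposes `madvise`: `MADV_WILLNEED` asks the kernel to start reading the pages in, so the later plain memory access is less likely to fault and block. A real async runtime would yield between the hint and the access; here the steps simply run sequentially.

```python
import mmap
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"page me in")
    path = f.name

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    if hasattr(mmap, "MADV_WILLNEED"):   # not available on every platform
        mm.madvise(mmap.MADV_WILLNEED)   # preparation: kick off readahead
    # ... a runtime could switch to another task here ...
    head = mm[:4].decode()               # the access that now (hopefully)
    mm.close()                           # resolves from RAM

os.unlink(path)
print(head)
```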