As long as you are doing OLTP on an RDBMS, I believe the proper way to "denormalize" is to just use materialized views, sacrificing a bit of write performance in order to gain read performance. In the OLAP scenario you are ingesting data from the OLTP side, which is normalized, so it's materialized views with extra steps.
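For concreteness, here is a minimal sketch of that trade-off, assuming PostgreSQL driven from Python/psycopg2; the orders/customers/line_items schema and the order_summary view are invented for the example:

    import psycopg2  # assumes a local PostgreSQL database; the schema below is hypothetical

    conn = psycopg2.connect("dbname=shop")
    conn.autocommit = True  # REFRESH ... CONCURRENTLY cannot run inside a transaction block
    cur = conn.cursor()

    # The normalized tables stay the source of truth; the "denormalized"
    # read shape lives in a materialized view instead of duplicated columns.
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS order_summary AS
        SELECT o.id   AS order_id,
               c.name AS customer_name,
               sum(li.quantity * li.unit_price) AS total
        FROM orders o
        JOIN customers c   ON c.id = o.customer_id
        JOIN line_items li ON li.order_id = o.id
        GROUP BY o.id, c.name
    """)
    # A unique index is what lets REFRESH ... CONCURRENTLY avoid blocking readers.
    cur.execute("CREATE UNIQUE INDEX IF NOT EXISTS order_summary_pk ON order_summary (order_id)")

    # The extra write-side cost is paid here (e.g. periodically or after batch loads);
    # reads then get pre-joined, pre-aggregated rows.
    cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY order_summary")
    cur.execute("SELECT customer_name, total FROM order_summary WHERE order_id = %s", (42,))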
If you are forced to use a document database you have to denormalize, because joining is hard.
So if by scale you mean using a document database, sure. Otherwise, especially on SSDs, RDBMSs usually benefit from normalization because there is less data to read, particularly if features that are old by today's standards, like join elimination, are implemented. Normalization also enables vertical partitioning.
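As a rough sketch of what vertical partitioning looks like (again assuming PostgreSQL; the users/user_profiles split and column names are made up for illustration):

    import psycopg2  # assumes PostgreSQL; table and column names are hypothetical

    conn = psycopg2.connect("dbname=shop")
    cur = conn.cursor()

    # Vertical partitioning: hot, narrow columns in one table; cold, wide
    # columns in another, both keyed by the same id.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS users (
            id         bigint PRIMARY KEY,
            email      text NOT NULL,
            created_at timestamptz NOT NULL DEFAULT now()
        );
        CREATE TABLE IF NOT EXISTS user_profiles (
            user_id bigint PRIMARY KEY REFERENCES users (id),
            bio     text,
            avatar  bytea
        )
    """)
    conn.commit()

    # Hot-path lookups touch only the narrow table, so far less data is read;
    # the wide columns are only joined in when actually needed.
    cur.execute("SELECT email FROM users WHERE id = %s", (42,))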
There was an argument to be had about denormalizing RDBMSs on HDDs, because HDDs heavily favour sequential reads over random reads. But that was really a consequence of the RDBMS being a leaky abstraction over the hardware.
Document databases have a better scalability story, but not because of denormalization. It's usually because they sacrifice ACID guarantees, choosing availability and lower latency over consistency in the sense of the CAP (or rather PACELC) theorem.
CAP has to do with distributed systems, not necessarily databases (unless they are also distributed).
Document databases/KV stores had a reputation for scalability/speed primarily because of the way they were used (key-based querying), and also because popular ones such as MongoDB can do automatic horizontal sharding, which is not available in most freely available RDBMSs. However, you can also treat an RDBMS as a KV store these days (with JSONB and a simple primary key/index) if you want, and there are distributed RDBMSs such as CockroachDB and Yugabyte.
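To illustrate that last point, here is a rough sketch of using PostgreSQL as a KV store with a JSONB column behind a plain primary key (the kv table and key scheme are invented for the example):

    import psycopg2
    from psycopg2.extras import Json  # adapts Python dicts to JSON/JSONB parameters

    conn = psycopg2.connect("dbname=shop")
    cur = conn.cursor()

    # A single table with a text primary key and a JSONB value is effectively a KV store.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS kv (
            key   text  PRIMARY KEY,
            value jsonb NOT NULL
        )
    """)

    # put: an upsert on the primary key
    cur.execute(
        "INSERT INTO kv (key, value) VALUES (%s, %s) "
        "ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value",
        ("user:42", Json({"name": "Ada", "plan": "pro"})),
    )

    # get: a point lookup on the primary key index
    cur.execute("SELECT value FROM kv WHERE key = %s", ("user:42",))
    print(cur.fetchone()[0])

    conn.commit()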