Here is an example pipeline that supports loading data from an unbounded source into BigQuery in batches using load jobs (avoiding BigQuery's streaming-insert costs).
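For anyone curious what this looks like in practice, here's a minimal sketch with the Beam Python SDK, assuming JSON messages on a Pub/Sub topic and an existing table; PROJECT, TOPIC, DATASET, and TABLE are placeholders. The key part is `method=FILE_LOADS` with a `triggering_frequency`, which batches the stream into periodic load jobs instead of streaming inserts:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(topic="projects/PROJECT/topics/TOPIC")
        | "Parse" >> beam.Map(json.loads)
        | "Write" >> beam.io.WriteToBigQuery(
            "PROJECT:DATASET.TABLE",
            # Batch the unbounded stream into load jobs rather than
            # using (billed) streaming inserts.
            method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
            # Roughly one load job every 5 minutes.
            triggering_frequency=300,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```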
Are you the author? OT, but I'm amazed the author charged a mere 100 euros for implementing that solution for the startup in question, even if they're cash-strapped. I'm not familiar with BigQuery, but I'm curious what a normal rate for solving an issue like that would look like.
You can also skip Pub/Sub, or use it to write files to Cloud Storage and then load from there: a Cloud Function can trigger the load job whenever a new object is created (a sketch of such a function follows below).
See: https://zero-master.github.io/posts/pub-sub-bigquery-beam/
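A hedged sketch of that Cloud Function approach (Python, 1st-gen background function triggered on `google.storage.object.finalize`), assuming newline-delimited JSON files and a pre-existing table; DATASET and TABLE are placeholders:

```python
from google.cloud import bigquery


def load_on_finalize(event, context):
    """Triggered when a new object lands in the bucket; starts a load job."""
    client = bigquery.Client()
    uri = f"gs://{event['bucket']}/{event['name']}"
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    # Load jobs are free in BigQuery; only storage and queries are billed.
    job = client.load_table_from_uri(uri, "DATASET.TABLE", job_config=job_config)
    print(f"Started load job {job.job_id} for {uri}")
```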