Learn Makefiles with the Tastiest Examples (makefiletutorial.com)
170 points by ingve on May 15, 2023 | hide | past | favorite | 95 comments


Rant incoming:

I really don't like Make for the common use cases it is being used for in the modern day, such as the Golang ecosystem or, in many cases, building Docker images.

Make, at its core, is a tool meant to help you structure your dependencies for incrementally building things, but people use it as a glorified task runner simply because it's such a common build tool that it is nearly always installed if you have a development toolchain on your machine.

Go and docker already have incremental compilation built in, and docker doesn't give definable artifacts, so you can't make other things depend on them either.

It is a powerful tool, but the syntax is hideous and has more jagged edges than bash does, and we aren’t even using it in a way that justifies this. Makes me so frustrated for some reason.

Like everyone deciding to use a supercar to do farmwork.


Hard disagree.

    GO_SRCS=$(shell find src -name '*.go' -not -name '*_test.go')
    GO_TESTS=$(shell find src -name '*_test.go')

    .PHONY: docker-run
    docker-run: target/image.target
        docker run example

    .PHONY: integration-test
    integration-test: target/integration-test.target

    .PHONY: test
    test: target/test.target

    target/bin: $(GO_SRCS)
        mkdir -p $(@D)
        GOOS=linux GOARCH=amd64 go build -ldflags='-s -w' -o $@

    target/context.target: Dockerfile target/bin
        rm -fr $(@:.target=)
        mkdir -p $(@:.target=)
        cp Dockerfile target/bin $(@:.target=)
        touch $@

    target/image.target: target/context.target
        docker build -t example $(<:.target=)
        touch $@

    target/integration-test.target: target/image.target
        # TODO: test the docker image
        touch $@

    target/test.target: $(GO_SRCS) $(GO_TESTS)
        go test
        touch $@
What is your better thing?

(The answer is Bazel, but if you think Make is a "supercar".......)


Most (all?) of your Makefile targets don't represent actual files to 'make'.

You're using make to run a bunch of tasks, including where tasks may depend on other tasks having been run before.

A task-runner tool like `just` is better suited to this task. https://github.com/casey/just

`just` has some nice UX improvements over `make`. (e.g. doesn't require recipes to be indented with hard tabs, can list recipes out of the box, recipes can take command line arguments, supports .env files, can be run from any subdirectory).


> Most (all?) of your Makefile targets don't represent actual files to 'make'.

> You're using make to run a bunch of tasks, including where tasks may depend on other tasks having been run before.

And the cool thing is....it works for both!

It can represent an actual file (e.g. a compiled binary) or abstract result (e.g., a successful test).


I wish `just` had a way to capture the output of functions and act on them. Without that, it is only a slightly more straightforward Make that's way less likely to be installed on the system, plus you have to take additional steps if you want shell completion for tasks.


Why do we need to add `.PHONY` for every target? This is one of the things I utterly despise about make.


.PHONY is how you distinguish between targets that are actual files vs targets that are virtual.

Seems like useful information to me....

It's best practice, though the only time you need it is when a virtual target happens to have the same name as an actual file.
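
A minimal illustration of that case (target and directory names made up): if a file called `clean` ever appears in the working directory, an unmarked `clean` target is considered up to date and `make clean` silently does nothing, so:

    .PHONY: clean
    clean:
        rm -rf build

With the `.PHONY` line, the recipe runs regardless of what files exist.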


Because make is misused

Make is a tool to make files based on dependencies

It was not created as a scripting language. It understands trees.

It is not to “make” your project. It “makes” “files”


If 'make' was called makefile, I don't think it'd be so popular. Naming is important


You don't. You can specify this in a single line.

Unless the question was, why do I have to mark targets as PHONY more generally.

In which case the answer is, you don't. You only do this when you want the recipe to always run when invoked, as opposed to the standard mode where invocation is conditional on the file being out-of-date.
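
For example, one declaration can cover all of them (target names here are illustrative):

    .PHONY: build test clean docker-run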


It's a GNU thing :P


> What is your better thing?

A shell script?


I use it as a task runner with dependencies in python projects. In particular, it checks requirements files against the virtualenv to see if anything needs to be installed/updated, then all the tasks depend on that. If you're up-to-date the task runs immediately, if you're not it'll update for you.

> and docker doesn’t give definable artifacts so you can’t make other things depend on them either

Similar to how old projects have "make configure" to do some initial setup before "make" actually builds the project, I've done stuff on occasion where something like "make check" would pull information out of a system and create timestamped files in a scratch/ directory. Then the normal "make" would compare those files to the codebase to see if the system needed to be updated.

It is different from the first use I just mentioned, since you need two commands and it's not entirely automatic, but it's still simpler than checking each of the dependencies yourself.
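
The requirements-tracking pattern from the first use can be sketched like this (paths and commands are illustrative): a stamp file records when the virtualenv last matched the requirements file, and every task depends on it:

    .venv/.stamp: requirements.txt
        python -m venv .venv
        .venv/bin/pip install -r requirements.txt
        touch $@

    .PHONY: test
    test: .venv/.stamp
        .venv/bin/pytest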


I too use Make to install `node_modules` and `.venv`.

It is very convenient never having to ask oneself "Is my node_modules up to date?" after pulling or switching branches. I'll just let Make figure that out.


Make is great. But. In your specific case, you may also be interested to check out Luigi.

https://luigi.readthedocs.io/en/stable/


Yeah. And it's wildly misunderstood too, so you get random people writing and running `make clean test` stuff that inherently disagrees with itself, which can do all kinds of nonsense if your system isn't normal/clean/running on Thursday.

I do still use it for simple automation, because it's nice to have a language-agnostic way to do simple things. But once it grows beyond about a page of text it tends to become a real nightmare, and is nigh impossible for most people to help maintain... which is not at all helped by its absolute lack of clear best-practices or warnings when misconfigured, and poor meshing with many common systems (like source-modifying tools, e.g. gofmt).

It doesn't help that every guide starts out with

    # so easy!
    thing:
      ./build thing
When in reality you pretty much always need at least how this page ends, to be even slightly stable and maintainable:

    # The final build step.
    $(BUILD_DIR)/$(TARGET_EXEC): $(OBJS)
      $(CXX) $(OBJS) -o $@ $(LDFLAGS)
    
    # Build step for C source
    $(BUILD_DIR)/%.c.o: %.c
      mkdir -p $(dir $@)
      $(CC) $(CPPFLAGS) $(CFLAGS) -c $< -o $@
    
    # Build step for C++ source
    $(BUILD_DIR)/%.cpp.o: %.cpp
      mkdir -p $(dir $@)
      $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $< -o $@
It's a horrifying bait-and-switch that a lot of people never fully learn their way through to the end.


That's true of every language, yes?

It always starts with `print 'Hello world'`, but it never stays that simple.


In some ways yes. But there are absolutely differing degrees of inconsistency in a language.

Make is pretty darn far down the "deeply inconsistent and error-prone" side of things when you try to do anything correctly and reliably with it, particularly on multiple systems and across various implementations/versions. It survives because it's ubiquitous and just barely good enough.


In terms of task running specifically it does generally stay that simple for the likes of just.

The only complexity there is getting it installed on everyone's machine, but that's true of most tooling, even the common stuff given versions won't match. I solve that with Nix.


I've pointed out "reboot --halt" elsewhere as one of the common self-contradictory instructions that people like to give to computers.

* http://jdebp.uk/Softwares/nosh/guide/commands/reboot.xml

* https://www.freedesktop.org/software/systemd/man/reboot.html

I suspect that you may have just started a list. (-:


I do not rule out that one exists, but what tool looks simpler (I wanted to say ‘nicer’, but I think we should avoid ‘it looks ugly’ arguments; I think those largely are about familiarity) once you start building something from code written in multiple programming languages?

And what tool is as powerful that doesn’t have the ‘problem’ that a lot of people never learn it fully? (Why would I learn make or any other tool to the full? I browse its manual so that I know what it can do, and (hopefully) remember features exist when I need them, so that I can (often temporarily) learn them then)


Most of Make is quite reasonable. The simple stuff is clear and effective and a lot of the foundations are good (dependency order? only redo changes? great!)

It's the uncountable number of edge cases, lack of versioning/features (you can't declare what you need, you just succeed/fail/misbehave, often silently), massive massive problems dealing with entirely normal things in file paths like spaces, and more useful output than `make -d` for very common cases like "why am I rebuilding every time" / accidentally cyclic dependencies.

For starters.

Make does a lot right, but the amount of inconsistencies and friction to be reliable with it is truly absurd and unnecessary, and you need to go to extreme lengths to correctly handle them (e.g. cmake).


FYI if you care to, you can prevent `make clean test` by looking at `$(MAKECMDGOALS)`.
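
One way to write that guard (a sketch; the policy itself is up to you):

    ifneq (,$(filter clean,$(MAKECMDGOALS)))
    ifneq (clean,$(MAKECMDGOALS))
    $(error run "make clean" by itself, not combined with other goals)
    endif
    endif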


I have some copypasta that I stick in every makefile that has too many people abusing it blindly, yeah. It works but it's a real pain that there isn't a standard way to do it.

Which is an immediate consequence of make being built to make things, not run tasks. So it goes.


What's the problem with "make clean test"? A test target would have a test-build prerequisite anyways, so

- clean out build artefacts - rebuild test builds - run tests

sounds very much ok to me.


One obvious problem is that that isn't what `make clean test` does.

Order isn't guaranteed, and most clean targets won't have any dependency relationship with test targets, so it could test and then clean. Or interleave them (clean between test dependencies). Or run them both simultaneously if someone has set the -j flag, and then who knows what happens. It does often work out, but it depends on a lot of things.

It's a consequence of make making things, not running tasks. You've told it to make two things that you've said are completely unrelated to each other. Order doesn't matter there, so make is free to do whatever it wants.

---

Other than that, make's behavior when a target updates dependencies of another target which do not share dependencies can get extremely complicated, and often depends on execution order. Clean generally affects many/most, so it's sometimes very problematic to run with any other.

You might also have computed test dependencies at parse time based on what's on disk, which have changed unexpectedly due to clean deleting those files. That can cause `make clean test` vs `make clean` and then `make test` to behave completely differently. The latter is the only consistently safe option, and the only one where your intent will always match what make will do.


You are correct that Make is being abused because it's common. Me, I'm going to keep abusing Make vs bothering to find something supposedly better, that will then inject dependency startup problems I'd rather avoid.

Yes, abusing Make the way I and many others do is not the ideal use case, but it works just fine. It's good enough at being a task manager.


That's a fine opinion, but I haven't found a more widely available tool to describe a project "recipe", where the project is written in one or more of many possible languages.

* Snakefile -> requires installation

* bash -> macOS is now ZSH. (Is bash a better choice than Make?)

* python -> what if I'm not using python?

What else is as "accessible" as Makefile?

EDIT: formatting


The issue for bash is also the issue for Makefiles.

make is not part of the default macOS install - You need to install developer tools - which I suppose if you are a developer is OK.

As for bash, as it is an old version on macOS you need to stick to a simpler subset - better to write as sh.

So sh seems to be the most general.

Then consider Windows - ah ... You are going to have to install something. Shells don't run natively and how do you get snakefile or just.

So python or make are the cross platform tools you have


> * bash -> macOS is now ZSH. (Is bash a better choice than Make?)

Makefiles don't run recipe code themselves; each recipe line is handed off to a shell ("sh" by default; the shell to use is changeable via the SHELL variable).
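
For instance, switching every recipe to bash is one assignment at the top of the Makefile:

    SHELL := /bin/bash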


The Makefile is going to be running external commands, no? So you still have the same problem (what are those external commands written in?) - whatever the answer to that question is, your task runner could be written in as well, probably.

At least, that's how I prefer to do it. A project that's heavy JS? Use a JS task runner. Heavy python? Python task runner. Heavy shell? And so on.


I have seen few hells as bad as "just use the target runtime for build."


It can be done well though, and when it is, it removes friction from the development process.


> A project that's heavy JS? Use a JS task runner.

Trying to keep up with JS task runners for web projects is why I started using Make and haven’t looked back.


Sure but if a developer already has a JS runtime and they don't have Make installed, you've added unnecessary friction to the development process.

I hate JS as much as the next guy, but if I'm developing in JS, I use a JS task runner. Even a simple one. There's not much to "keep up with" IMO - they are quite easy to create and use, if the common/popular ones are missing features or move too fast for you.


Step 0: Ensure your project only uses one thing.


But is there not a "minimal Makefile" type of subset that you can use that appears really clean and tidy? My Makefiles are super basic and are basically task runners. Perhaps there are traps I have not fallen into but I value that I can count on them working on whatever distro I run in 5 years time.


A shell script will also work in whatever distro in whatever years time


Yeah, but Make is parallel+incremental.

And can do what a shell script can.


A distro running a Busybox userland doesn't necessarily support the same command options as one running GNU or BSD.


I always include Makefiles as a way of documenting useful (short) tasks for developers that are onboarding to projects. It gives them confidence in their ability to pick things up quickly. Whether or not they want to actually use make is up to them, but it’s one more thing they can reference.


I have long held similar opinions on make, and I've recently started using mage[0] in more and more go projects and have been happy with the result.

It's more task-oriented, the way people tend to write Makefiles with .PHONY rules, but it's all in go. It can be bootstrapped just with go too, and comes with some utilities to do make-like incremental builds if you need to.

[0]: https://magefile.org/


Couldn't you have achieved this even more simply by using make with a go shell?

Btw, from the linked page:

> Makefiles are hard to read and hard to write. Mostly because makefiles are essentially fancy bash scripts with significant white space and additional make-related syntax.

Wait what? What does bash have to do with anything? Mage may well be amazing, but it doesn't sound like this person knows make that well at all. Which makes me think they're simply trying to reinvent the wheel -- in 'go'.


> Couldn't you have achieved this even more simply by using make with a go shell?

Make is still really about file to file transformations, and `go` already wraps up all of the behavior one would normally use make for. Plus you need make + a shell + go, vs. mage where all that's needed is go.

I can't speak for the author, but I assume they're reacting to how Makefiles tend to be used in go projects and not how make works generally.


How does it compare to Just?


I looked at both and came away thinking mage was more convenient for go-only projects. Just looked good and I would probably pick it for something that wasn't go-only (if make didn't make sense instead).


Agree, though it can be handy for sort of non-standard dependency stuff. Like if you need to pull something with curl before a build, and you want "curl succeeded (exited zero)" to be a prerequisite for that build. It's often less work to get a Makefile to do that than a script, or the make-like things that come with some other languages.

Maybe not the best example, but if you have a project with several weird things like that, make is often easier.
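
A sketch of that kind of rule (URL and paths are placeholders): the download only reruns if the file is missing, and because curl must exit zero before the temp file is renamed, a failed or partial download never satisfies the prerequisite:

    vendor/data.json:
        mkdir -p $(@D)
        curl -fsSL https://example.com/data.json -o $@.tmp
        mv $@.tmp $@

    build: vendor/data.json
        ./build.sh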


True.

I do, however, like having some of the amenities of a task runner without bringing in the "cruft" that's suited toward incremental builds. I like using Just (https://just.systems/) for those cases.


go-task is a very good task runner. Better than "just" IMO. I am now using go-task with every project. Right after creating the gitignore, I create the Taskfile.

https://github.com/go-task/task


Second time that has been mentioned today. I wish it wasn't YAML.


You were not kidding, I am not loving that syntax.


> Better than "just" IMO.

Could you elaborate on its advantages vs just?


I take the other side of this and find that it being so useful its use as a task runner just means make is really awesome. Even when “misused” it’s super useful.

That being said I really do like this article. I want to learn how to use make properly.


I agree. That's why we use the small grml tool as task/command runner in all our projects.

https://github.com/desertbit/grml


What’s your preferred alternative?


I recently used my blog as a way to learn how to use Make and Makefiles. I write posts in Markdown, and my Makefile generates HTML for each using Pandoc, then stitches it all together into one `index.html` file. The webpage behaves like a single-page-app, with a table-of-contents and everything, with just some CSS trickery.

I know there are probably frameworks and tools that do this for me already, in ways both more flexible and robust than how I've done it, but I've really enjoyed the process of getting to this point. I started writing dumb shell scripts and struggling with weird cases and messy code, then learned more about Make and improved on things by a lot. In the end, it didn't take a lot of code, yet I feel like I can be really pleased with the result.

I have come to really like Make, I have to say. It is probably used more often than it should, judging by some of the comments here. I've seen a few Makefiles where a substantial percentage of recipes are `.PHONY`, and maybe those are the cases where you should think about looking for a different task-runner? Not sure, but I enjoy Make anyway.


I've started using justfiles (https://just.systems/man/en/) for everything I would've previously used Makefiles for that aren't directly related to compiling code.

Make is a wonderful way to provide a standard set of entry points into working with repos. For instance, if you can standardize on `make setup`, `make test`, `make docker`, etc. on all of your organization's code, it's very easy for a new developer on a project to start contributing quickly. You could do that with shell scripts, but then you have to re-implement all the command line parsing that make gives you for free. And with justfiles, you can have something that looks and feels like make, but with a more pleasant syntax for non-compiling jobs.

IMO, Just is to make as zsh/fish are to bash. It's not ubiquitously available, and it's not 100% backward compatible, but if you can convince your coworkers to adopt it as your standard, it makes life that much easier for everyone.


Just does not support dependency tracking. That eliminates a huge chunk of make's usefulness right there.


That's a feature if you're not using a language that needs it. I've never, not once, ever wanted or needed it for working with Python code, for example.


> For instance, if you can standardize on `make setup`, `make test`, `make docker`, etc. on all of your organization's code, it's very easy for a new developer on a project to start contributing quickly.

This is the real magic trick. In fact you can even go one step further and standardize on just one command: `make help`. Then you walk over to your QA lead and say "Clone the repo, type `make help` and if anything about setup, deployment etc. isn't clear or doesn't work, log a bug." Now your infrastructure is subject to your regular software development lifecycle. You have achieved "Infrastructure as code" with a tool that was authored in 1976.

Unreasonably effective.
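
One common idiom for such a `help` target (a sketch; the trailing `##` comments are just a convention): annotate each target line and have `help` extract them:

    .PHONY: help
    help:  ## Show this help
        @grep -E '^[a-zA-Z_-]+:.*## ' $(MAKEFILE_LIST) \
            | awk 'BEGIN {FS = ":.*## "} {printf "%-20s %s\n", $$1, $$2}'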


> And with justfiles, you can have something that looks and feels like make, but with a more pleasant syntax for non-compiling jobs.

Make is a part of UNIX, which ensures it is extremely stable and is available on all UNIX systems. You don't get that from justfiles. You get syntactic sugar, but all the best parts of Make are thrown out of the window.


OK, that's just not true. For starters, which make? GNU and BSD are common variants, and aren't 100% compatible. The overlap is large and useful, but does mean that you have to be aware of it if you're using multiple systems.

For my uses, make offers nothing of value over just. I don't need any of the clever build dependency resolution stuff when I'm working in Python, and cargo handles everything better when I'm using Rust. Which isn't to say that make isn't enormously powerful and useful for other people, only that its extra power causes more work than reward for the ways I personally want to use it. For me, just is everything about make that I actually use, minus all the complicating features that get in my way.


> OK, that's just not true.

If you really feel that way then you will have to ask the Open Group people to fix the UNIX Shell and Utilities part, because they seem to have included a specification of make.

https://pubs.opengroup.org/onlinepubs/009695299/utilities/ma...

> For starters, which make?

The interface specified in the standard.

> GNU and BSD are common variants, and aren't 100% compatible.

Irrelevant. UNIX specifies an interface. It matters nothing if some implementation added their extensions.

> For my uses, make offers nothing of value (...)

That's perfectly fine. Just install just in your machines and work on your personal projects as you see fit.

If on the other hand you intend to distribute something you work on, using make ensures that those using specific target platforms don't need to install extra software because it already ships preinstalled.


>You get syntactic sugar, but all the best parts of Make are thrown out of the window.

What does make have that just is missing? I have only dipped my toe into just, but it seems like a more-or-less superset of functionality. Plus, all sorts of niceties: built-in functions (path manipulation!), ability to declare scripting languages, self-documenting through `--list`, etc. Quite a lot of natural feature evolution if make was not forever frozen in time by the POSIX standard.


> What does make have that just is missing?

Make is a part of UNIX, so it's ubiquitous, stable, and time-proof.


By this argument there would never be any innovation


You didn't point out any innovation. New for the sake of new is not innovation, it's just churn.


Another poster listed a litany of problems with make


If the best thing going for make is its ubiquity I'd rather have a tool that's better designed (for non-compiling jobs).


GNU Make is shockingly versatile and unreasonably effective. I have used Makefiles to build, check, and install projects in C, C++, golang, and python. I have abused Makefiles to generate websites, Docker images, PDFs (from LaTeX), and even macro-expanded yaml. It's old and weird and I love it.


> It's old and weird and I love it.

Likewise. And it has the added benefit of parallelizing those website and LaTeX builds according to the dependency graph by just adding “-j4” to the make call.


What is there to parallelize in a latex build? Isn't it incremental?


I assume parallelizing latex builds of separate sections for their website. Like building out blogs/physics and blogs/cs from their latex equivalents in parallel.


Sometimes the LaTeX document contains many tables or figures that also need to be built. But those can be built in parallel.
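
A sketch of how that looks (tool names and layout are illustrative): each figure is an independent target, so `make -j4` renders them concurrently before the final document build:

    FIGS := $(patsubst %.gp,%.pdf,$(wildcard figures/*.gp))

    figures/%.pdf: figures/%.gp
        gnuplot $<

    paper.pdf: paper.tex $(FIGS)
        latexmk -pdf paper.tex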


I end up using Makefiles for many projects that have even trivial stages in their execution (e.g. manual/one-off ETL pipelines). I do feel like I bastardize it sometimes as a convenient command runner or brain augmentation: sure makes it easy to remember how to run something if you can just type "make" or look at the top of the file, a cool intersection between "can get it working in 5 minutes" and "will be readable in 5 months".

The Just command runner [1] is good, but is not as ubiquitous.

[1]: https://github.com/casey/just


I've got the same usage of Make and wanted to give Just a try. Which limitations did you identify with Just?


Pattern rules and not being on a zillion Linux boxes already. Maybe I just haven't seen a good idiom for pattern rules...


Nit picking.

From the article:

> Popular C/C++ alternative build systems are SCons, CMake, Bazel, and Ninja.

CMake is not a build system. It's a build system generator. A higher level abstraction over the problem of specifying projects, which greatly simplifies the work of creating and maintaining them. Once a CMake project is created, it is then used to create the actual build system. Originally it was mainly Makefile, but nowadays it also supports Ninja, Xcode, and Visual Studio. I'm sure that if there is any interest cmake could also be updated to generate other build systems.


Arguably CMake is a build system; as a user you can build your project only by running CMake commands. The fact that it will run e.g. ninja or make under-the-hood can be ignored for many use-cases.


> Arguably CMake is a build system; as a user you can build your project only by running CMake commands.

If you put together a shell script that calls some other program that runs a build, that doesn't make shell script a build system.

> The fact that it will run e.g. ninja or make under-the-hood can be ignored for many use-cases.

You can't, though. Some features are only available on projects generated by cmake for specific build systems.


I've been using and writing Makefiles for over a decade and, ironically, my least favorite experiences with it are in the ecosystems (C and C++) it's ostensibly intended for.

When using it for Python or multi-language projects (especially ones with lots of code generation or intermediate outputs), it's a joy: it's very satisfying to be able to write dependency patterns and relations and have the machine satisfy and parallelize them.


The make manual was one of the more useful and enjoyable technical reads I've done in a while (really!).

I suspect that people going "bleh make" below have simply not read the manual.

make is incredible and super-useful. much of what is used instead of make is simply reinventing the wheel. (badly).


Make is extremely useful, but it is not simple or pretty.

I honestly think instead of teaching make, it would be easier to refine it and then teach the simpler, more elegant make.

honestly, there is SO much low-hanging fruit.

- tabs vs spaces, indentation, the mess of continuing multiline statements

- variables are inelegant, interpreted out-of-order, and = vs :=

- quoting maintains the mess of the rest of linux where you confuse a literal vs a special character

- files, directories and targets (.PHONY) are confusing

and lots more all the way down


Those are good complaints but what's far more valuable is:

- Unix standard

- documentation

- established knowledge

- existing code bases

- stable and well tested


> Unix standard

That’s enough? What about 1/2 - 1/3 of the devs on windows?


Yes? Everyone benefits from a standard specification.

Besides, most Windows devs are using Ubuntu or Cygwin. Or they just use the IDE for their environment and would never touch a CLI.


Makefiles are great for what they were made for. Compiling a binary? You should consider using Make.

I get frustrated when I start seeing tests and run commands hidden behind a make task. An example I saw recently was using a makefile to run docker compose. Simple enough, but it used a non-standard docker compose file, so it required the -f flag for every invocation. Which is fine since that's handled for you by the makefile. Then it had docker compose run commands to invoke pytests. One key feature of running tests is passing arguments to limit test runs to specific packages/files/functions - but of course passing arguments down to a command isn't supported by Make. All very convoluted and not the best tool for the job IMO. A simple bash script with "$@" to pass my args to pytest is a lot more straightforward and allows pytest features to be used without needing to do anything extra.


This is silly. Make was made for hierarchical tasks that can be done incrementally. This was largely about compiling things, sure. But, I'd wager all old make files have test targets. My favorite example in recentish times is http://ftp.cs.stanford.edu/pub/sgb/Makefile. And yeah, has a test target.

That docker compose and friends weren't good for building on top of is their "fault," as it were. Stability of software is not something that modern build tools really take to heart, sadly. Note that this is not "do not crash" stability.

I cede that progress is progress, such that there may be better tools today. I'd hesitate to base anything on python's ecosystem, sadly. They jumped a hell of a shark in the past few years that will be tough to get trust back on.


> but of course passing arguments down to a command isn’t supported by Make

Sure it is.

    .PHONY: test
    test:
        pytest $(TEST_FILTER)

    make TEST_FILTER=example test


Put this at the top of your Makefile for a node project:

  export PATH := node_modules/.bin:$(PATH)


I resisted learning Make because of the clunkiness around it (Tab-indentation? C-specific features? Gross!), but after using it a few times now I've come to appreciate it.

Most recent case was for Starbound mods. The macOS and Linux versions of Starbound don't ship with any built-in tools for uploading mods to the Steam Workshop (unlike Windows, which apparently has a dedicated GUI for it), and I didn't want to have to manually go through a bunch of SteamCMD steps every time I wanted to publish an update, so for my first real mod I put together a Makefile to automatically pack all the files together (using Starbound's included asset_packer CLI tool), populate a metadata.vdf for the Steam Workshop, and run the right steamcmd incantations to get everything up. Once that was reliably working, I condensed everything into a GitHub template repo (https://github.com/YellowApple/Starbound-Mod) and ended up with a pretty straightforward (albeit a bit clunky in spots) universal Makefile:

    MODNAME  = StarboundMod

    ifeq "$(shell uname)" "Linux"
    PLATFORM = linux
    endif
    ifeq "$(shell uname)" "Darwin"
    PLATFORM = macos
    endif

    SB_PACKER = ../../$(PLATFORM)/asset_packer
    OUT = ./out

    SRC_DIRS = achievements ai animations behaviors biomes \
        celestial cinematics codex collections \
        cursors damage dialog dungeons effects \
        humanoid interface items leveling liquids \
        monsters music names npcs objects parallax \
        particles plants player projectiles quests \
        radiomessages recipes rendering scripts sfx \
        ships sky spawntypes species stagehands stats \
        tech tenants terrain tiles tilesets treasure \
        vehicles versioning weather
    SRC_EXTS = config macros patch png wav

    STEAMCMD = steamcmd +login $(STEAMCMD_USER)
    STEAM_ID_PARSER = awk '/"publishedfileid"/{print $$2}' \
        metadata.vdf | sed 's/"//g'

    $(OUT)/pkg/contents.pak: $(SRC_DIRS) $(SRC_EXTS)
        rm -rf $(OUT)/pkg
        mkdir -p $(OUT)/pkg
        $(SB_PACKER) $(OUT)/src $(OUT)/pkg/$(MODNAME).pak

    $(OUT)/src:
        rm -rf $(OUT)/src
        mkdir -p $(OUT)/src
        cp _metadata $(OUT)/src/

    $(SRC_DIRS): FORCE $(OUT)/src
        @cp -rv $@ $(OUT)/src/$@ 2>/dev/null \
            || echo 'No $@ folder present in mod root; skipping'

    $(SRC_EXTS): FORCE $(OUT)/src
        @cp -v *.$@ $(OUT)/src/ 2>/dev/null \
            || echo 'No *.$@ files present in mod root; skipping'

    upload: $(OUT)/pkg/contents.pak
        mkdir -p $(OUT)/workshop
        cp $(OUT)/pkg/$(MODNAME).pak $(OUT)/workshop/contents.pak
        cp preview.jpg $(OUT)/preview.jpg
        sed 's,{{PWD}},$(PWD),g' <metadata.vdf.template >metadata.vdf
        $(STEAMCMD) +workshop_build_item $(PWD)/metadata.vdf +quit
        $(eval STEAM_ID=$(shell $(STEAM_ID_PARSER)))
        sed -i 's/"publishedfileid" "0"/"publishedfileid" "$(STEAM_ID)"/g' \
            metadata.vdf.template

    clean: FORCE
        rm -rf $(OUT)
        rm -f metadata.vdf

    FORCE:
All in all, it works well enough. Certainly nicer than trying to do it with shell scripts IMO, and I don't have to worry quite as much about whether someone else using my mod template (including my future self) is able to install some fancier build tool / task runner given how ubiquitous (GNU) Make tends to be.


My favorite glorified taskrunner is https://just.systems/, using the `justfile` to formalize random task steps I do in local development.


If you're thinking of using it as a task runner, do yourself a favour and pick something else (eg just).

Make has no end of gotchas; it's a complete pain to work with.


When it said “tastiest”, I thought the example problem was going to be making a cake or something. Seems like a good tutorial regardless.


This tutorial feels like it's only for C++ (-like) coders, since it focuses on "(re)compiling"

> Makefiles are used to help decide which parts of a large program need to be recompiled

I've found uses for Makefiles, where users don't have a clue what "compiling" is.

(Though other commenters may not necessarily appreciate such use cases!)



