
Did you use an existing package or did you write something from scratch? I'm also looking at rewriting my CV in Typst, though the fact that I am happy with my current job means that it is not a very high priority task


I used existing templates from the [typst universe repository](https://typst.app/universe/) for the resume then built something much simpler from scratch for a secondary document, a sort of cover letter/case study of projects I've done.

If there's interest I can maybe take some PII out of my repo and make it public. Not like there's anything wildly private in there, would just prefer to not get any more spam calls than I already do.


You don't have to go through the effort for my sake. I was mainly interested in hearing if you had recommendations with regards to existing packages, though there's a decent chance that I'll just end up creating something from scratch


I don't know what their official policy is on breaking changes, but packages published via Typst Universe[1] are versioned, and you specify the exact version you want when importing a package in your document. So while you may need to install an older compiler (which is a single, self-contained executable), I don't think that you'll have to worry about your dependencies

[1] https://typst.app/universe/
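
For example, an import in the document source pins an exact version (the package name and version below are hypothetical):

  #import "@preview/some-resume-template:0.2.1": resume

As long as that exact version stays in the package index, the import should keep resolving to the same code.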


In addition to what the other commenters write, an advantage of Typst is that it is self-contained:

You just need one (large) executable to do everything, whereas with Pandoc you (by default) need to have LaTeX installed if you want to generate PDFs


There's usually some confusion about this, so to clarify in advance:

- The Typst online editor is proprietary: https://typst.app

- The Typst compiler/CLI is open source: https://github.com/typst/typst

I hear that the online editor is quite good, but personally I've only ever used the CLI.

I originally picked up Typst as yet another replacement for PowerPoint (replacing my use of Marp), but have since used it for a poster and some minor text documents. And I've been very happy with the results. I know that a lot of people love using LaTeX for that kind of thing, and with good reason, but I always forget most of the details between my (occasional) uses of LaTeX, while I've found Typst to be very easy to return to


I used LaTeX for decades and had convinced myself nothing could ever replace it. Just this month, however, I converted to Typst for a large project. Absolutely no regrets: undying respect to the great Knuth, but the experience with Typst is already simply better on almost every axis. I use TinyMist with vscode and the development experience is terrific. I was modifying templates within a day of picking it up, which (skill issue, undoubtedly) always gave me nightmares in LaTeX.


100% agree. With TeX it feels like when you use a package or template, you're stuck with every choice it made because changing it yourself is just too daunting. With Typst I feel confident that I can go in and muck with whatever I don't like. It's a really refreshing feeling.


Although this is objectively an advantage, it means I've spent just as much of my time mucking about with customisation in Typst as I have in LaTeX :p


The two applications were developed on quite different computers and with quite different toolchains.

Interestingly, Knuth has stated that his development of Literate Programming:

http://literateprogramming.com/

was more important than TeX --- fortunately, his publishing _TeX: The Program_:

https://www.goodreads.com/book/show/499934.Computers_Typeset...

has been very helpful to folks developing successors, add-ons, and new versions, facilitating the creation of web2c and change files, which made tools such as pdftex, omega, xetex, and luatex possible.


The first link for Literate Programming gives me a certificate warning and an off-putting site, likely malware or something. This appears to be a working link: https://www.cs.tufts.edu/~nr/cs257/archive/literate-programm...


Famously, Knuth was trying to (and pretty much did) solve digital typesetting, not create a nice piece of HCI, so this is all as it should be, or at least as might be expected.


Staying in his lane, living his best life—dropping incredible things to humanity every now and then. I had to check since I hadn’t thought about it for… a decade apparently, but it looks like TAOCP 4B came out a couple of years ago.


Speaking of skill, learning a new language is always daunting, but I found that LLMs do a pretty good job of generating Typst code. I relied on that a lot for generating snippets of code, and learning about the language, which would've taken me more time otherwise. Although the Typst docs are pretty good, regardless.


What LLMs? In my experience they do a terrible job with Typst. Very frequently ChatGPT and Gemini will produce code that doesn't work. I'm not sure if it's using an older syntax or just hallucinating. Additionally, it's rarely able to fix it after I provide the error and even copy-paste docs.

Maybe I was just unlucky or you had better luck with another model. But I was very surprised to hear this, because Typst is my chief example of a language that LLMs are bad at.


Here is my experience.

Claude did generate a rather good template for what I needed. It did not compile at first, but I copy-pasted the errors and it fixed them.

Not all was good, though. It used literal bullets instead of the `-` required for lists, but on the whole the experience was positive.

It took me less time to fix the template than it would have taken to write it from scratch.

Something else Claude was good at: I could throw it a crude ASCII "art" representation of what I wanted and get the right Typst code back.


This was a few months ago, but mainly Claude Sonnet 3.5 IIRC.

You can't escape hallucinations, of course, but they can be mitigated somewhat. Don't ask it to generate a bunch of code at once. Feed it a snippet of code, and tell it precisely what you want it to do. These are general LLM rules applicable to any language and project, and I've had success with them with Typst. I also used it just to get explanations, or general ideas about how to do something, and then figure out the actual syntax and working code myself. It's strange that you haven't had luck with pasting docs. That has usually worked well for me.

I also think that LLMs don't struggle as much with Typst because the language is relatively simple, and there is little bad or outdated content about it online for them to have been trained on. I assume that the API probably hasn't changed much either, and there haven't been many compatibility issues, so it's not as important which version the LLM was trained on.


It failed miserably with very simple requests. It was wrong but it wasn't clear if it was hallucinating or simply using old syntax.

I did not try Claude though. Perhaps that performs better.

edit: I just prompted chatGPT with this and pasted the result into the Typst editor. 20 compile errors.

>Make a Typst doc with two columns per page and US page dimensions

Same prompt but for LaTeX, pasted into Overleaf. No problems.


I just tried the same prompt in ChatGPT and it gave 10 errors. Mostly they were because it was using `#` as a comment character, which suggests that it has not been given very much Typst code to examine.


I just tried the same prompt with Claude Sonnet 4.5. (Using a generic "programming" system prompt. It's a bit long, so I won't paste it here, but I can share it if you're interested.)

It generated this:

------------------

  #set page(
    paper: "us-letter",
    columns: 2,
  )

  #lorem(100)

This creates a Typst document with:

- US Letter page size (8.5" × 11")

- 2-column layout

- Sample text (replace `#lorem(100)` with your actual content)

------------------

Which compiled to PDF without errors using Typst CLI v0.13.1. The two-column layout is not visible with `lorem(100)`, but `lorem(1000)` shows it clearly.

I'm surprised you got 20 compile errors for what should be a very simple document.

So I would call this a success. I doubt Claude is that much better than ChatGPT (GPT-5?).

I'm not saying that this invalidates your experience, but I would suggest experimenting with different strategies and LLMs. Try some prompt variations, feeding it more (or less) context, higher-quality context, documentation, etc. These tools are highly sensitive and unreliable, but can be coerced into producing helpful results with some care and patience.

EDIT: I was curious, so I tried the same with ChatGPT-4o using OpenRouter's default system prompt. It produced this:

------------------

  // Set US letter page size
  #set page(width: 8.5in, height: 11in)
  
  // Create two-column layout using grid
  #layout(grid: (
    columns: 2,
    gutter: 0.5in, // space between columns
  ))
  
  // Example content
  = My Two-Column Document
  
  This is some text in the first paragraph. It will flow into two columns across the page.
  
  == Section One
  
  Here is more content that will automatically be distributed between the two columns. Typst handles the balancing and splitting of content across columns for you.
  
  == Section Two
  
  You can continue to add as much content as you'd like, and it will remain in the two-column format automatically.
------------------

This failed to compile because of a single error with `#layout`. Still, Typst errors are clear, and I could've probably fixed it easily myself by referencing the documentation.

Then I tried the same model with my custom generic programming system prompt, and it produced this:

------------------

  #set page(
    width: 8.5in,
    height: 11in,
    margin: 1in,
    columns: 2,
    column-gap: 0.5in,
  )
  
  = Title
  
  Here is some example text in the first paragraph. It will flow into two columns automatically.
  
  == Section
  
  More text continues here, and Typst will manage the column layout accordingly.
------------------

Which is better, but still failed to compile because `column-gap` is not a valid argument. Simply removing it compiled without errors.
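
For reference, the same preamble with `column-gap` removed, which compiled without errors (the gutter between columns then falls back to Typst's default):

  #set page(
    width: 8.5in,
    height: 11in,
    margin: 1in,
    columns: 2,
  )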

I still would've consulted the official documentation to determine whether this is idiomatic code or not, but these are not terrible results. As with all LLMs, you'll get the most out of them if you use them as assistants, rather than vibe coding tools.


Yep, this is how I started my Typst journey. I was intimidated by Typst at first and wanted to make some mildly complicated documents that really aren't covered by the tutorial, so I had ChatGPT generate elements of the document I needed. Now I'm more self-sufficient: able to write functions, use more complicated features of Typst, and make better use of the docs.


> but I found that LLMs do a pretty good job of generating Typst code.

Interestingly, I've had the opposite experience. ChatGPT and Claude repeatedly gave me errors, apologized profusely, and then said, "ah, I had the wrong keyword. It's actually <blahblah>"--and that would simply give me another error and a subsequent apology.

At least Gemini had the good taste of telling me that it didn't know how to do what I wanted with typst.

It's certainly possible that I was trying to do something a little too unusual (who knows), but I chalked it up to the LLMs not having a large enough corpus of training text.

On the bright side, the typst documentation is quite good and it was just a matter of adjusting example code that got me on track.


Well, that just goes to show that these tools are wildly unpredictable. I've had bad experiences generating Go, whereas I've read many experiences of the opposite.

> I chalked it up to the LLMs not having a large enough corpus of training text.

I'm inclined to believe the opposite, actually. It's not so much about the size of the training data, but the quality of it. Garbage in, garbage out. Typst is still very young, and there's not much bad code in the wild.

And the language itself plays a large role. A simple language with less esoteric syntax and features should be easier to train on and generate than something more complex. This is why I think LLMs are notoriously bad at generating Rust code. There's plenty of it to train on, but Rust is a deep pit of complexity and unusual syntax. Though it helps when the language is strict and statically typed, so that the compiler can catch issues early. I would dread relying on generated Python code, despite how popular and simple it is on the surface.


TinyMist is a great alternative to the online editor for local development in VS Code / Cursor https://myriad-dreamin.github.io/tinymist/


Yeah, that's also what I've been using, and yes it is very good. Thank you for bringing it up


This is a great example of the open core model done right. Have a fully-featured F/LOSS product, and build value-add commercial products and services on top of it.

I've also only used the CLI tool, and didn't miss any features from it. The commercial product was never pushed or promoted to me. I personally have no need for it, and I'm only vaguely aware that it exists. But I'm sure that people who do need the friendlier UI/UX and more advanced features would be willing to pay for it, so I'm glad that the team has a stable source of income that enables them to continue maintaining the project in the long term.

Looking at the pricing page now... Wow, the plans are quite generous and affordable. Way to go!


I don't really need the web version, but I pay for a yearly subscription to support development of Typst.

I do find the web version handy for sharing Typst examples and, on occasion, working on a document while syncing with a private GitHub repository.


It’s a great example of how many open source projects start … until they change.


I mean, we can be cynical about it, or we can acknowledge the fact that running a sustainable business around OSS is entrepreneurship on hard mode.

Yes, many companies start with good intentions which then change at some point, but there have also been companies that have managed to successfully balance both sides.

Grafana comes to mind, as well as ClickHouse, and TimescaleDB. I'm not as familiar with the latter two, but Grafana is certainly a good example. You can probably find some blemishes even on their record, but overall, I would say they have been excellent stewards of OSS. Especially considering the large amount of products they maintain.

So far, Typst seems to be on the right track as well, which is worthy of praise.


Converted to Typst last year from LaTeX for book authoring, invoices, and slides (was using a hand-rolled rst2ppt tool for slides). Happy to never touch LaTeX again. Typst is that good.


isitreallyfoss.com did a breakdown of Typst that goes more in-depth:

https://isitreallyfoss.com/projects/typst/

It seems mostly fine except for this bit:

> The compiler includes a package manager “Typst Universe” that may connect to servers owned and operated by Typst GmbH


So do most package managers...


I think most package managers take the repository URLs from config; with Typst it's hardcoded in the binary (right?).


I LOVE Marp! Why do you like Typst more than Marp for presentations?


I much preferred Marp to PowerPoint, but there were several parts of it I wasn't fond of:

- Using CSS for formatting resulted in a lot of one-off rules, and was a lot noisier and less readable than the equivalent in Typst.

- The use of CSS for formatting also meant that the Marp compiler couldn't catch most of my silly mistakes. With Typst the compiler will catch those mistakes.

- Using a plugin to selectively highlight lines required writing a custom "engine" in JS, which was a pain to get working. Using a package in Typst is extremely simple.

- And I had to use npm to install the plugin in the first place. Typst comes with a package manager built-in.

- Generating PDFs required that I installed Chrome/Chromium. Typst does that out of the box.

The only place that I can think of where Marp is ahead of Typst is with regard to generating HTML-based presentations. But that probably won't be the case forever, and I personally always use PDFs for the final presentation, since that means that a lot less can go wrong. Especially so if I am not using my own PC when giving the presentation


The online editor is extremely useful for quick projects with other people where real-time editing works better than git, and where people don't want to download tools.


Speaking of core product and online editor: Over the decades many of the products I worked on developed some form of reporting feature.

Not always, but often they required PDF output. Not always, but often they ended up being LaTeX-based, with all the nice and some of the ugly consequences. Especially the security story was never great.

Does anyone know how hard it would be to integrate the Typst renderer into an existing Rust product?


I embed the Typst binary in a Go binary deployed to Cloud Run. I use this to generate PDFs on the fly.

I need to generate a 2-page invoice, and I can generate it in under 100 ms. IIRC, it's easier to integrate with Rust, since the PDF renderer is written in Rust.


Not particularly difficult. The main Typst crate should have you covered. I've seen quite a lot of projects that do it already.


> I originally picked up Typst as yet another replacement for PowerPoint

I’ve also mainly used it for slides so far. Can recommend Slydst for that.


I've been using touying so far. It took some effort, since I was learning Typst at the same time, but I was able to convert our "official" PowerPoint template to a touying template that I am quite happy with.

From what I can tell, Slydst seems intended for more minimalist slides. But it looks nice, so I'll have to keep it in mind for cases where I don't need the above template


Did you ever use Beamer? I've done a few presentations with it, and while I wouldn't call it great, I preferred it to PowerPoint or Keynote


How do you translate a bit more visually complex presentation to Typst? Should I create my iconography in SVG and try to position those?


What do you mean by a "more visually complex presentation"? Typst has some built-in support for drawing shapes [1], but if you need more complex figures then cetz is also an option [2]. (See the sketch after the links.)

But if you mean animations (including animated transitions), then I do not believe that is possible in Typst, since it outputs PDFs. I also do not believe that it is possible to embed multimedia in a document.

[1] https://typst.app/docs/reference/visualize/

[2] https://typst.app/universe/package/cetz/
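
For instance, the built-in shape functions from [1] are enough for simple diagrams. A minimal sketch:

  #rect(width: 4cm, height: 2cm, fill: aqua)
  #circle(radius: 1cm, stroke: 2pt + red)
  #line(length: 4cm, stroke: 1pt)

And for positioning SVG iconography, `place` plus `image` should work, e.g. `#place(top + right, image("icon.svg", width: 2cm))` (the file name being hypothetical).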


It's cool that nanopore technologies are getting this affordable, but keep in mind that these technologies (to my knowledge) still have very high error rates compared to older sequencing techniques. Both in terms of individual nucleotides (A, C, G, and Ts) being misread, and in terms of stretches of nucleotides being mistakenly added to or removed from the resulting sequences (indels).

So, yes, you can sequence your genome relatively cheaply using these technologies at home, but you won't be able to draw any conclusions from the results


With the recent R10 flow cells the error rate has improved. The basecalling models have also been steadily improving and therefore reducing the error rate.

For assembling a bacterial genome, the consensus error rate is as low as, or in some cases better than, Illumina's.

The Nanopore platform has use cases that Illumina falls short on.

> So, yes, you can sequence your genome relatively cheaply using these technologies at home, but you won't be able to draw any conclusions from the results

Agreed, any at home sequencing should not be used to draw any conclusions.


That's a prevalent misconception even in the scientific community. Sure, each read has 1% incorrect bases (0.01). But each segment of DNA is read many times over. More or less 0.01^(many times) ≈ 0 incorrect bases.
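
A back-of-the-envelope sketch, assuming independent errors (in LaTeX notation):

  p^{\text{depth}} = 0.01^{30} = 10^{-60}

i.e. at 30x depth the chance that every read is wrong at a given site is negligible. In practice errors are not fully independent, so systematic errors (where reads err together) are the real concern, as discussed below.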


The author got less than 1x coverage for their efforts. Reliable base-calls require significantly higher coverage, and therefore a significantly higher spend


> That's a prevalent misconception even in the scientific community. Sure, each read has 1% incorrect bases (0.01). But each segment of DNA is read many times over. More or less 0.01^(many times) ≈ 0 incorrect bases.

That's true in targeted sequencing, but when you try to sequence a whole genome, this is unlikely.


> That's true in targeted sequencing, but when you try to sequence a whole genome, this is unlikely.

Whole-genome shotgun sequencing is pretty cheap these days.

The person you are replying to doesn't give any specific numbers, but in my experience, you aim for 5-20x average coverage for population level studies, depending on the number of samples and what you are looking for, and 30x or higher for studies where individuals are important.

For context, coverage refers to the (average) number of resulting DNA sequences that cover a given position in the target genome. Though there is of course variation in local coverage, regardless of your average coverage, and that can result in individual base-calls being more or less reliable


I’m referring to the experiment done in the OP - the most I’ve read about from a MinION flow cell is 8 Gb (and this is from cell line preps with tons of DNA, so the coverage isn’t great).

You need multiple flow cells or a higher capacity flow cell to get anything close to 1X on an unselected genome prep.

Shotgun sequencing probably isn’t what you meant to say - this is all enzymatic or, if it’s sonicated, gets size selected.


What the person you replied to described read like short read sequencing with PCR amplification to me ("each segment of DNA is read many times over"), rather than nanopore sequencing. My reply to you was written based on that (possibly false) assumption.

But if we are talking nanopore sequencing, then yes, you need multiple flowcells. Which is not a problem if you are not a private person attempting to sequence your own genome on the cheap


There wasn’t enough information to tell (on my 1 minute scan) which nanopore kit was used, but the presence of PCR does not imply short reads.

You can do nanopore PCR/cDNA workflows right up to the largest known mRNAs (13kb).

Edit:

I’m not sure if you’re saying that you can’t do a 5/20/30X genome on nanopore - that’s also not true. It only makes sense in particular research settings, of course.


PCR is central to short read sequencing... as well as Roche 454, and can also be used in some protocols in Nanopore.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3849550/

You can sequence a human genome on a MinION - but you need to purchase 5 flow cells to reach 11X (if they are used correctly): https://nanoporetech.com/news/news-human-genome-minion


> PCR [...] can also be used in some protocols in Nanopore.

Yes... and that is why I said that the presence of PCR does not imply short reads.

> You can sequence a human genome on a MinION - but you need to purchase 5 flow cells to reach 11X (if they are used correctly): https://nanoporetech.com/news/news-human-genome-minion

That's why I said that 5/20/30X coverage is possible in the appropriate research setting


My comment was with NanoPore in mind.


I worked with Nanopore data about four years ago, and I found that that's mostly true, but for some reason, at some sites, there were systematic errors where more than half of the reads were wrong.

I can't 100% prove it wasn't a legit mutation, but our lab did several tests where we sequenced the same sample with both Illumina and Nanopore, and found Nanopore to be less than perfect even with extreme depth. Like, our depth was so high we routinely experienced overflow bugs in the assembly software because it stored the depth in a UInt16.


What was the DNA source? At the same time (4 years ago) there were issues with specific species - birds and some metagenome species were the worst, if I remember correctly.


Influenza virus


> Eh? Linus has called it "experimental garbage that no one could be using" a whole bunch of times, based on absolutely nothing as far as I can tell.

Where did Linus call bcachefs "experimental garbage"? I've tried finding those comments before, but all I've been able to find are your comments stating that Linus said that


I'm not sure I follow. Couldn't you display text that stands still by (re)drawing the outline of the text repeatedly? It would essentially be a two frame animation


I think the algorithm in the video is doing a very specific thing where there's a zero-width pixel-grid-clamped stroke (picture an etch-a-sketch-like seam carving "between" the bounds of pixels on the grid) moving about the grid, altering (with XOR?) anything it advances across.

So, sure, you could try to implement this by having a seam that is made to "reverberate" back and forth "across" the outlining pixels of a static shape on each frame. But that's not exactly the same thing as selecting the outline of the shape itself and having those pixels update each frame. Given the way this algorithm looks to work, pushing the seam "inwards" vs "outwards" across the same set of pixels forming the outline might gather an entirely different subset of pixels, creating a lot of holes or perhaps double-counting pixels.

And if you fix those problems, then you're not really using this algorithm any more; you're just doing the much-more-boring thing of taking a list of pixel positions forming the outline and updating them each frame. :)


I believe the algorithm in the video works by flipping the pixel color when the pixel changes from foreground (some shape) to background, or from background to foreground. If the shape doesn't move, there is no such change, so it disappears.

In the OP the foreground pixels continuously change (scrolling in this case) while the background doesn't change. That's a different method of separating background and foreground.


The fact that these formats are unable to represent degenerate bases (Ns in particular, but also the remaining IUPAC bases) in my experience renders them unusable for many, if not most, use-cases, including for the storage of FASTQ data


The question of how to represent things not specified in the original format is a tough one.

At the loosest end, a format can leave lots of space for new symbols, and you can just use those to represent something new. But then not everyone agrees on what a new symbol means, and, worse, multiple groups can use the same symbol to mean different things.

On the other end of the spectrum, you can be strict about the format, and not leave space for new symbols. Then to represent new things you need a new standard, and people to agree on it.

It's mostly a question of how easily code can be updated and agreed upon, and how strict you can require your tooling to be w.r.t. formats.


The original FASTA/Pearson format and fasta/tfasta tools have supported 'N' for ambiguous nucleotides since at least 1996 [1], and the FASTQ format has to my knowledge always supported 'N' bases (i.e. since around 2000). IUPAC codes themselves date back to 1970 [2]. You can probably get away with not supporting the full range of IUPAC nucleotide codes, but not supporting 'N' makes your tool unable to represent what is probably the majority of available FASTA/FASTQ data

[1] See 'release.v16' in the fasta2 release at https://fasta.bioch.virginia.edu/wrpearson/fasta/fasta_versi...

[2] https://iupac.qmul.ac.uk/misc/naabb.html


The problem is IUPAC just exists.


I don't dislike the format, and it is much, much better than what it replaced, but SAM, and its binary sister-format BAM, does have some flaws:

- The original index format could not handle large chromosomes, so now there are two index formats: .bai and .csi

- For BAM, the CIGAR (alignment description) operation count is limited to 16 bits, which means that very long alignments cannot be represented. One workaround I've seen (but thankfully not used) is saving the CIGAR as a string in a tag

- SAM cannot unambiguously represent sequences with only a single base (e.g. after trimming), since a '*' in the quality column can be interpreted either as a single Phred score (9) or as a special value meaning "no qualities". BAM can represent such sequences unambiguously, but most tools output SAM
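
To illustrate the last point, here is a made-up single-base record (the fields are QNAME, FLAG, RNAME, POS, MAPQ, CIGAR, RNEXT, PNEXT, TLEN, SEQ, QUAL):

  read1  0  chr1  1000  60  1M  *  0  0  A  *

The final '*' can be read either as a Phred score of 9 (ASCII 42 minus the 33 offset) or as "no quality values available"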


True. I'd consider these minor flaws. W.r.t. the CIGAR, the spec says you do need to store it as a tag.


Disclosing that you used AI three days after making the PR, after 4 people had already commented on your code, doesn't sit right with me. That's the kind of thing that should be disclosed in the original PR message. Especially so if you are not confident in the generated code


Sounds like a junior vibe coder with no understanding of software development trying to boost their CV. Or at least I hope that’s the case.


I graduated literally 3 months ago so that's my skill level.

I also have no idea what the social norms are for AI. I posted the comment after a friend on Discord said I should disclose my use of AI.

Ironically, the underlying reason for the PR is that Cline and Copilot keep trying to use `int` where modern C++ coding standards suggest `size_t` (or something similar).

