Writing a Resume in PostScript Using Nushell

It's hard to differentiate yourself in today's job market. There are many ways to write a resume: you can use Word, or write it in Markdown or LaTeX. Developers may even keep their resume in JSON. But at the end of the day, every resume must be converted to PDF.

What if you are a bit crazy and paranoid that the ATS drops too many essential keywords when converting from doc to PDF? Or you simply want to over-engineer your resume to demonstrate mastery of critical thinking? You could learn LaTeX, but why not go overboard and learn PostScript?

What is PostScript?

For those who don't know, PostScript is a Turing-complete language that was the predecessor to PDF. It is quite counter-intuitive to learn because of its stack-based operations and postfix notation (i.e. [2 3 add] replaces [2 + 3]). But it is versatile: the same language that renders a resume can also draw rather complex mathematical curves.
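To make the stack discipline concrete, here is a minimal Python sketch of a reverse-Polish (postfix) evaluator. This is an illustration of the evaluation model, not PostScript itself, and the `rpn` helper is my own invention:

```python
# Minimal reverse-Polish (postfix) evaluator: each token either pushes a
# value onto the stack or pops operands off it, as PostScript operators do.
def rpn(tokens):
    stack = []
    ops = {"add": lambda a, b: a + b,
           "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()           # topmost operand comes off first
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # operands go straight onto the stack
    return stack[-1]

print(rpn(["2", "3", "add"]))  # 5.0, the postfix form of 2 + 3
```

The same discipline explains why PostScript's show pops the string it just printed: every operator consumes its arguments from the top of the stack.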

Writing a PostScript resume from scratch is an extremely arduous task: every edit forces you to re-align the new lines in the same stack-based syntax, producing an ugly mess that is uneditable and unreadable.

To print a simple hello world in PostScript, a few things have to be managed:

%!PS

/ORCA 15 selectfont

50 750 moveto
(HELLO WORLD!) show        

Notice how you must declare the font, the font size, and the position, measured from the bottom-left corner by convention, in order to place Hello World at the top of the page? Yes, stack-based programming is extremely tedious. When you want to compile this and view it as a PDF, simply:

ps2pdf helloworld.ps; mupdf helloworld.pdf        

JSON resume schema

There is a JSON resume schema (https://jsonresume.org/schema) for developers who want to show off their versatility by rendering their resume in a variety of website templates, really flexing their muscles at creativity and good looks.

But one caveat of these website templates is that JavaScript can mangle a lot of the content that resume-parsing algorithms read, especially the ones tuned to PDFs converted from Word documents.

Nevertheless, keeping a resume in a structured data format is a very rational approach: it makes editing easy and lets you harness some nice features of querying data and rendering elements in a procedural compilation process.

Computer Build Process

Compiling a resume into PDF is similar to compiling a program into an .exe or an ELF binary executable: there is source code, with imports, which is sent to a compiler that interprets the language and eventually outputs raw bytes.

Often there is an intermediate format that the source is translated into before the final executable is built. In the case of C, Rust, and many other languages, that is LLVM IR, which is itself a strange hybrid of assembly and human-readable code. It applies many optimizations under the hood to make the machine-readable bytes run a little faster, since we don't write for loops the way a machine executes them, and it manages system resources a little more efficiently, for example by mapping your variables onto machine registers, the stack, and the heap.

I have looked into various tools for automating such builds, but there was always something inelegant about them in one form or another. LaTeX requires several gigabytes of packages, plugins, and fonts to do anything useful, and the HTML-to-PDF route through pandoc's PDF engines is problematic, given that XML metadata embedded in PDFs can mangle your content. I experimented with Nix as a full-fledged tool to reproducibly build and optimize my PDF resume, but the 30 seconds Nix takes to go through evaluation told me it is simply the wrong tool for this job. Nix would probably make more sense if the document to be compiled were something as big as Wikipedia, so that its evaluation step only atomically rebuilds what has changed. Nix lang is also not a very portable data format for a resume, especially since it lacks sensible tools for inserting and editing data.

A shell build process

On Unix systems like macOS and Linux, most installations are done through shell scripts, and they are quite readable and maintainable, but I wasn't going to write a bash script with jq peppered all over the place. That's like juggling three languages at once: bash, jq, and postfix PostScript. This is where I find Nushell to be the perfect companion for the task, though to use it I had to write some functional helpers to cradle the reverse-Polish postfix notation of PostScript.

We do need to download and install Ghostscript to interpret PostScript and to provide the utilities that convert PostScript to PDF. Because PostScript is a logical predecessor of PDF, the translation is much closer to one-to-one than it is from XML.

Using the template JSON example from the JSON schema website, we can open it in Nushell:

export def importjson [f:string] {
  open $f
}        

Now, in order to create a new domain-specific language (DSL) for resume rendering in PostScript, we need to declare things in functions so that when the data is rendered, the pointer arithmetic is calculated for us. For standard A4 paper, the margin and top declarations are prepared like so:

let lm = 50
let top = 842 - $lm
let A4 = $"($lm) ($top) moveto"
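A4 is 595 x 842 PostScript points, so with a 50-point margin the first moveto lands at (50, 792). A quick Python check of the same arithmetic, mirroring the Nushell above:

```python
# A4 in PostScript points; y grows upward from the bottom-left origin.
A4_HEIGHT = 842
lm = 50                # left margin, reused here as the top margin
top = A4_HEIGHT - lm   # 792: fifty points down from the top edge
A4 = f"{lm} {top} moveto"
print(A4)  # 50 792 moveto
```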

Here, rmoveto is a useful operator because after printing a string (popping it from the stack using show), you can move relative to where you left off. Given that a resume document is a serial data stream, you only need to selectively continue on the same line or move down a line.

let struct = {
  |l: list, s|
  $l
  | enumerate
  | each {|x|
      # move down one line (scaled by the font size), render the item,
      # then move back left by an approximate string width (0.6 em per char)
      $" 0 -(do $fibfont $s) rmoveto " + (do $render $x) + $" -(($x.item | to text | str stats | get chars) * (do $fibfont $s) * 3 / 5) 0 rmoveto"
    }
  | prepend (do $font $s)
}

Furthermore, I declare only one base font size for the entire document and use the golden ratio (the limit of the Fibonacci sequence) to downscale or upscale the other fonts. This is kind of the magic:

let fontsize = 30
let gr = (1 + (5 | math sqrt)) / 2
let fibfont = {|s:int| ($fontsize / $gr ** $s) | into int }
let font = {|s:int| $"/ORCA (do $fibfont $s) selectfont" }
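To see what that scale works out to, here is the same computation in Python (assuming Nushell's into int truncates, like Python's int): with a base size of 30, levels 0 through 4 come out to 30, 18, 11, 7, and 4 points.

```python
import math

fontsize = 30
gr = (1 + math.sqrt(5)) / 2            # golden ratio, ~1.618

def fibfont(s):
    # truncate toward zero, mirroring the `into int` in the Nushell closure
    return int(fontsize / gr ** s)

sizes = [fibfont(s) for s in range(5)]
print(sizes)  # [30, 18, 11, 7, 4]
```

Each successive level shrinks by a factor of the golden ratio, which gives a pleasing typographic hierarchy without tuning each size by hand.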

This setup allows us to create intermediate functions like the following:

let font0 = {|x| do $se (do $struct $x 0)}
  let font1 = {|x| do $se (do $struct $x 1)}
  let font2 = {|x| do $se (do $struct $x 2)}
  let font3 = {|x| do $se (do $struct $x 3)}
  let font4 = {|x| do $se (do $struct $x 4)}        

The most annoying thing I had to write, and a bit of a hack, is the following string escape:

let se = {|x| $"\) ($x | to text) \(" }

What this does is close the current PostScript string literal, splice in raw commands, and open the next string, which is quite useful for keeping data intact without it being swallowed into the string that a nu 'each' statement renders. An illustration of this:

[ "Work" ") <rmoveto or font change> (" "Work2" ]
==> (Work) <rmoveto or font change> (Work2)
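A Python sketch of why the escape works: once the items are joined and the whole run is wrapped in one outer pair of parentheses, the escaped item closes the current string literal, emits raw PostScript, and opens the next one. Illustrative only; the `0 -10 rmoveto` stands in for any rmoveto or font change, and the real version lives in the $se closure above.

```python
# The middle item has been "escaped": it starts with `)` and ends with `(`,
# so after wrapping, it sits outside the string literals as raw PostScript.
items = ["Work", ") 0 -10 rmoveto (", "Work2"]
line = "(" + " ".join(items) + ")"
print(line)  # (Work ) 0 -10 rmoveto ( Work2)
```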

This gives us a very elegant and, imho, readable DSL:

$A4
  | append (
    [$cv.basics.name]
      | do $struct $in 0
    )
    | append (
      [
        $cv.basics.phone
        $cv.basics.email
        $cv.basics.location.address
        $cv.basics.location.city
        [$cv.basics.location.countryCode $cv.basics.location.postalCode]
      ]
    | do $struct $in 2
    )
  | append (
    $cv.work
      | each {
        |x|
          | append (do $font1 [$x.name] )
          | append (do $font2 [$x.position] )
          | append (do $font3 [[
            "from" $x.startDate
            "to" $x.endDate
          ]] )
          | append (do $font3 $x.highlights )
        }
      | do $l2psstr $in
    )
  | append (
    $cv.projects
      | each {
        |x|
          | append (do $font1 [$x.name] )
          | append (do $font3 [[
            "from" $x.startDate
            "to" $x.endDate
          ]])
        }
      | do $l2psstr $in
    )
  | append (
    $cv.education
      | each {
        |x|
          | append (do $font1 [$x.institution])
          | append (do $font2 [$x.studyType])
          | append (do $font3 [[
            "from" $x.startDate
            "to" $x.endDate
          ]] )
        }
      | do $l2psstr $in
    )
  | append (
    $cv.languages
      | each {
        |x|
          | append (do $font1 [$x.language])
          | append (do $font2 [$x.fluency])
        }
      | do $l2psstr $in
    )
  | str replace --all "\(\n\) show" ""
  | str replace --all "\(\)" ""
  | save resume.ps -f        



(Image: a render of an example resume)

I am not suggesting that this resume will win design awards, but it provides a pathway to reasonably render JSON data into a PDF "report," with some of the conveniences of PostScript's 2D rendering abilities, such as auto-calculated font sizes a la the golden ratio.

This isn't a resume meant to impress in its current form, but later iterations will make it extremely versatile; the direction I am going is to scale font size with salience, alongside AI-generated design.

An update: you can also use pure pipes, without Nushell pre-parsing the PostScript, and build it directly with enscript:

echo "hello world" | enscript -p - | ps2pdf - report.pdf

More stack-based computing; wonderful! It's at the heart of WASM too...
