Unfortunately this is not yet a compiler; it's a lexer and a parser. Traditionally these are the first things you present in a compilers course, but from my point of view, that's kind of dumb, for two reasons.

First, a parser isn't a compiler, so this means you get through the whole first part of the course without having a compiler. By contrast, a code generator is a compiler, at least if there's some way to invoke it.

Second, although the theory of formal languages and parsers is complex, well-developed, and fascinating, it's not clear that it's important to building a compiler. People have written large systems in FORTH. MUMPS famously ran a number of hospital systems without being able to break a statement across lines. And a complete scannerless BNF grammar for S-expressions (without readmacros) is something like this:

     <sexp> ::= <atomchar>+ " "* | "(" " "* <sexp>* ")" " "*
And many, many huge systems have been written in Lisps whose grammars amount to little more than that.
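A grammar that small transcribes almost line-for-line into a recursive-descent parser. Here's a minimal sketch in Python (the function name and the choice to return atoms as strings and lists as Python lists are my own, purely for illustration):

```python
def parse_sexp(s, i=0):
    """Parse one S-expression in s starting at index i.

    Returns (value, next_index): atoms come back as strings,
    parenthesized forms as nested lists.
    """
    while i < len(s) and s[i] == ' ':   # " "* : skip spaces
        i += 1
    if s[i] == '(':                     # "(" " "* <sexp>* ")" " "*
        i += 1
        items = []
        while True:
            while i < len(s) and s[i] == ' ':
                i += 1
            if s[i] == ')':
                return items, i + 1
            item, i = parse_sexp(s, i)
            items.append(item)
    else:                               # <atomchar>+
        j = i
        while j < len(s) and s[j] not in ' ()':
            j += 1
        return s[i:j], j
```

So `parse_sexp("(a (b c) d)")` yields `(['a', ['b', 'c'], 'd'], 11)`. The whole parser is shorter than the precedence table of a typical infix language.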

My favorite grammars are PEGs, which accommodate scannerless parsing rather better than LL(1) and LALR(1) parsers, because PEGs have infinite lookahead with worst-case linear-time parsing (at the expense, it must be admitted, of being a huge memory hog). PEGs are also composable in a way that LALR and LL grammars aren't. I wrote a one-page PEG parser generator targeting JS at https://github.com/kragen/peg-bootstrap/blob/master/peg.md a few years back.
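The linear-time-but-memory-hungry tradeoff comes from packrat parsing: memoize every (rule, position) pair, so each rule is tried at each position at most once. A toy sketch of the idea in Python (the rule and its name are invented here just to show the mechanism):

```python
from functools import lru_cache

# Packrat parsing in miniature: a rule is a function from an input
# position to the position after a successful match (or None on
# failure).  Memoizing on position is what turns a PEG's unbounded
# backtracking into worst-case linear time -- at the cost of a memo
# table proportional to (input length) x (number of rules).

TEXT = "((()))"

@lru_cache(maxsize=None)
def parens(pos):
    # parens <- "(" parens ")" / ""
    # PEG ordered choice: commit to the first alternative that matches.
    if pos < len(TEXT) and TEXT[pos] == '(':
        inner = parens(pos + 1)
        if inner < len(TEXT) and TEXT[inner] == ')':
            return inner + 1
    return pos  # the empty alternative always succeeds
```

Here `parens(0)` returns 6, consuming all of `"((()))"`. The memo table is also what makes PEGs composable: a rule neither knows nor cares what called it, unlike an LALR item set.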

But I think an excessive focus on the syntax of a programming language really detracts from what's really revolutionary about programming languages (and compilers), which is their semantics. We have much better theories of semantics now than we had in the 1970s when the traditional compilers course was being laid out. Books like Essentials of Programming Languages can be wonderful introductions to the fuzzier ones, and there's a lot of work in things like Coq and Idris to come up with tractable formalizations. But you don't need much of a theory of semantics to get a simple compiler up and running!

But I haven't ever actually taught a compilers course, so maybe my ideas are miscalibrated about what students would enjoy and find motivating (having a working compiler for a simple language after doing the first problem set) and what they would find difficult out of proportion to any rewards it might bring (getting recursive-descent parsers for complex grammars working, debugging precedence rules, refactoring grammars to eliminate left recursion, encountering unexpected exponential-time parse failures, etc.).

I'm really glad you said this, because it articulates something I've thought for a long time.

The focus on syntax in compilers literature is basically bike-shedding. People focus on syntax because they understand it, but code generation is much harder.

Matt Might's blog is a rare exception.

Do you have any resources on code generators?

I don't know enough about code generation to offer confident recommendations! I mean I really liked Sorav Bansal's dissertation, Peephole Superoptimization, and in recent years there's been a bunch of noise about using SMT solvers. But what's the mainstream? What are the standard techniques, and what are their strengths and weaknesses? I have very little idea.

Still, I don't think it's fair to say without qualification, "Code generation is much harder." Very simple code generation can be done by pasting together canned code fragments (the original meaning of "compiler") and occasionally computing and encoding a jump offset. A very simple code generator like the one in StoneKnifeForth https://github.com/kragen/stoneknifeforth is simpler than the parser needs to be for many popular languages. However, at least with my limited knowledge, it appears to me that parsing is a relatively closed-ended problem — sure, you can work hard to improve your error detection and recovery, and to give more useful error messages, but you're apparently going to get very little return for even enormous efforts at that. Optimization, on the other hand, which is part of code generation, is potentially arbitrarily complex, and you can keep getting good returns on your efforts for quite a long time.
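To make "pasting together canned code fragments" concrete, here's a sketch of a code generator in that original sense: each AST node emits a fixed snippet of stack-machine code, and the snippets are simply concatenated. The opcode names and the tiny interpreter are invented for this example, not taken from StoneKnifeForth:

```python
def compile_expr(ast):
    """Compile an int literal or an (op, left, right) tuple to a list
    of stack-machine instructions, by concatenating canned fragments."""
    if isinstance(ast, int):
        return [f"push {ast}"]
    op, left, right = ast
    fragment = {'+': 'add', '*': 'mul'}[op]   # one canned fragment per operator
    return compile_expr(left) + compile_expr(right) + [fragment]

def run(code):
    """A trivial stack machine, so the generated code can actually run."""
    stack = []
    for insn in code:
        if insn.startswith('push'):
            stack.append(int(insn.split()[1]))
        elif insn == 'add':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif insn == 'mul':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]
```

So `compile_expr(('+', 2, ('*', 3, 4)))` produces `['push 2', 'push 3', 'push 4', 'mul', 'add']`, and `run` evaluates it to 14. The entire "code generator" is five lines; targeting real machine code adds the chore of encoding instructions and jump offsets, but not much conceptual difficulty, until you start wanting optimization.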

So I would say that the easiest code generation is usually easier than the easiest parsing, unless you have the liberty to choose your language to make it easier to parse. But the hardest code generation is much, much harder than the hardest parsing. (Again, unless you're parsing a language deliberately designed to be difficult!)