Wednesday, June 09, 2010

Let's make a shit JavaScript interpreter! Part one.




As a learning exercise, I've begun writing an ECMAScript (JavaScript) interpreter in Python. It doesn't really exist yet, and when it does it will run really slowly and won't support all JS features.

So... let's make a "from scratch", all parsing, all dancing, shit interpreter of our very own!

Teaching something is a great way to learn. Also, writing things on my blog always gets good comments: hints, tips, plenty of heart, and outright HATE from people. All useful and entertaining :)

Tokenising

So to start with, we need something to turn a .js file into a list of tokens. This type of program is called a tokeniser.

From some JavaScript like this:
function i_can_has_cheezbrgr () {return 'yum';};
into a token list something like this:
[
    {"type":"name", "value":"function", "from":0, "to":8},
    {"type":"name", "value":"i_can_has_cheezbrgr", "from":9, "to":28},
    {"type":"operator", "value":"(", "from":29, "to":30},
    {"type":"operator", "value":")", "from":30, "to":31},
    {"type":"operator", "value":"{", "from":32, "to":33},
    {"type":"name", "value":"return", "from":33, "to":39},
    {"type":"string", "value":"yum", "from":40, "to":45},
    {"type":"operator", "value":";", "from":45, "to":46},
    {"type":"operator", "value":"}", "from":46, "to":47},
    {"type":"operator", "value":";", "from":47, "to":48}
]
Wikipedia has a page on Parsing (also see List_of_unusual_articles for some other background information).

"Tokenization is the process of demarcating and possibly classifying sections of a string of input characters. The resulting tokens are then passed on to some other form of processing. The process can be considered a sub-task of parsing input." -- wikipedia Lexical_analysis#Token page.

We can has vegetarian cheeseburger... but how can we parse javascript?

To the rescue comes Uncle Crockford, the JavaScript guru of JSLint fame. He wrote this lovely article: http://javascript.crockford.com/tdop/tdop.html. The ideas come from Vaughan Pratt's 1973 paper "Top Down Operator Precedence". The Crockford article is great, since it is free, short, and well-written JavaScript... unlike the 1973 paper it gets the ideas from, which is behind a paywall, long, and written in a 1973 language called "(l,(i,(s,(p))))".

As well as being short and simple, the approach is proven: Phil Hassey used "Top Down Operator Precedence" and this article on his journey making tinypy.

Goat driven development



Just as Phil did with tinypy, I'm going to use Goat Driven Development. Well, I'm not even sure what Goat Driven Development is... so maybe not.

Another Python-using dude, Fredrik Lundh, wrote some articles on this: "Simple Top-Down Parsing in Python" and "Top-Down Operator Precedence Parsing".

Also see Eli Bendersky's article on Top Down Operator Precedence.

So where to begin?

After reading those articles a few times... scratching my head 13 times, making 27 hums, a few haaarrrrs, one hrmmmm, and four lalalas...

light bulb: A brilliant plan!

Eli Bendersky implements a full tokeniser and parser for simple expressions like "1 + 2 * 4".

Let's copy this approach, but simplify it even more. Our first step is to make a tokeniser for such an expression. That should be easy, right?


A token data structure

Uncle Doug Crockford uses this structure for a token.

// Produce an array of simple token objects from a string.
// A simple token object contains these members:
// type: 'name', 'string', 'number', 'operator'
// value: string or number value of the token
// from: index of first character of the token
// to: index of the last character + 1


Here's an example token from above:

{"type":"name", "value":"i_can_has_cheezbrgr", "from":9, "to":28}

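Since our interpreter lives in Python, a plain dict mirrors this structure nicely. Here's a tiny helper to build one (make_token is my own hypothetical name, not something from Crockford's code):

def make_token(type_, value, from_, to):
    # Build a Crockford-style token as a plain Python dict.
    return {"type": type_, "value": value, "from": from_, "to": to}

# make_token("name", "i_can_has_cheezbrgr", 9, 28) rebuilds the token above.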


Writing the tokeniser

Often a tokeniser is generated... or written by hand.

Fredrik Lundh writes a simple tokeniser using a regular expression.

>>> import re
>>> program = "1 + 2"
>>> [(number, operator) for number, operator in
...     re.compile(r"\s*(?:(\d+)|(.))").findall(program)]
[('1', ''), ('', '+'), ('2', '')]
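
As an aside, the same pattern can be pushed a little further with re.finditer, which hands us the match positions for free... exactly what we need for the from/to fields. This is a sketch of my own, not something from Lundh's article:

import re

TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

def regex_tokenise(program):
    # Yield Crockford-style tokens, using the position of whichever
    # group matched for the "from" and "to" fields.
    for match in TOKEN_RE.finditer(program):
        number, operator = match.groups()
        if number is not None:
            yield {"type": "number", "value": int(number),
                   "from": match.start(1), "to": match.end(1)}
        else:
            yield {"type": "operator", "value": operator,
                   "from": match.start(2), "to": match.end(2)}

# list(regex_tokenise("1 + 2")) gives a number token, a '+' operator
# token, and another number token, each with correct offsets.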


Regexes are a valid approach... but regexen blow up minds. Instead, I'm going to write ours as a state machine: a big while loop with lots of ifs and elses.
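
Here's a first rough sketch of that shape, and a starting point for the homework below... just numbers and single-character operators for now, nothing like the final tokeniser:

def tokenise(program):
    # A tiny hand-written state machine: walk the string one
    # character at a time, collecting runs of digits into number
    # tokens and treating anything else (bar whitespace) as a
    # single-character operator token.
    tokens = []
    i = 0
    while i < len(program):
        c = program[i]
        if c.isspace():
            i += 1
        elif c.isdigit():
            start = i
            while i < len(program) and program[i].isdigit():
                i += 1
            tokens.append({"type": "number",
                           "value": int(program[start:i]),
                           "from": start, "to": i})
        else:
            tokens.append({"type": "operator", "value": c,
                           "from": i, "to": i + 1})
            i += 1
    return tokens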

Our homework

Write a tokeniser for simple expressions like "1 + 2 * 4". Output a list of tokens like the JavaScript example does... e.g.

{"type":"name", "value":"i_can_has_cheezbrgr", "from":9, "to":28}

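If your tokeniser follows that same structure, "1 + 2 * 4" should come out something like this:

[
    {"type":"number", "value":1, "from":0, "to":1},
    {"type":"operator", "value":"+", "from":2, "to":3},
    {"type":"number", "value":2, "from":4, "to":5},
    {"type":"operator", "value":"*", "from":6, "to":7},
    {"type":"number", "value":4, "from":8, "to":9}
]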

Until next time...

Really, I have no idea what I'm doing... but that's never stopped me before! It's going to be a shit JavaScript, but it will be our shit JavaScript.

5 comments:

Juho Vepsäläinen said...

Here's my little implementation: http://gist.github.com/460197.

If you are willing to forget from/to, you can refactor the offset hack out.

tomviner said...

Best blog post title ever :-)

Nick Harbour said...

It seems like a simple LALR parser generator like lex and yacc would be more than sufficient for this problem. Is there a lack of a viable Python implementation?

Juho Vepsäläinen said...

@Nick Harbour: Ned Batchelder maintains a nice listing of Python related parsing tools here.

illume said...

I finally got round to publishing part two.

At this rate, I'll finish this thing just before the aliens take over in 2043.