Welcome to part 2 of building an interpreter in pure JavaScript.

In the previous article, we introduced the language `printly`, explained its syntax, and showed how to build a lexer for it. The lexer converts the code into tokens, which are much more manageable to read, mainly because we no longer need to think about things like case sensitivity or how many spaces sit between the parts we care about.

At this point, we are ready to go to the next step. So what are we doing now?

Now, we parse.

## Building a parser

A parser is an algorithm that receives the tokens, runs them against the language's grammar, and makes sense of them. In the end, we get a nice array of structured statements which our interpreter can easily traverse and make use of.

As you can see from that sentence, we have two things we need to create:

- Grammar for our language to verify that the code was written correctly.
- Parser to transform the tokens into structured statements.

So to start, let's take our tokens from the previous part as an example:

```
[
  {
    type: 'name',
    value: 'variable',
    start: 0,
    end: 8,
    line: 0,
    character: 0
  },
  {
    type: 'operator',
    value: '=',
    start: 9,
    end: 10,
    line: 0,
    character: 9
  },
  {
    type: 'number',
    value: '5',
    start: 11,
    end: 12,
    line: 0,
    character: 11
  }
];
```

For parsing, we need to take these tokens, verify them against the grammar, and transform them into a statement.

So how do we detect that?

One implementation would be to read a token, try to find a corresponding rule, apply it, and then transform the result. But it would be really hard to match everything that way, because code parts can combine in an infinite number of ways. You can have an assignment that calls a function, performs addition, and does a boolean check, in endless variations.

So, that approach will not work. How about we approach the problem differently?

What if we create the rules first, declare one of them the start rule, and then try to match as much as we can? As long as the rules match, we can go as deep as needed until we match everything. It sounds complicated at first, but it will get clearer as we continue.

And this set of rules will also serve as the grammar for our language.

## Defining the grammar of our language

Our grammar needs to be concise and understandable. We could use words to explain it, but we are programmers so we will complicate it with a notation. :)

We will invent a notation for our language:

```
// Rule is matched only if SubRule1 and SubRule2
// match in this exact order.
// This is our start rule.
Rule -> SubRule1 SubRule2
// SubRule1 is matched if there is SubRule4 zero or infinite times
SubRule1 -> SubRule4*
// SubRule4 is matched if either rule TokenA or the group (TokenB TokenC) matches exactly.
SubRule4 -> TokenA | (TokenB TokenC)
// SubRule2 is matched if rule TokenC optionally matches (it can match or not), and rule TokenD matches 1 or infinite times.
SubRule2 -> TokenC? TokenD+
// TokenA rule matches if the current token is of type 'keyword' and value is 'if'
TokenA -> <Token('keyword', 'if')>
// TokenB rule matches if the current token is of type 'number'
TokenB -> <Token('number')>
// TokenC rule matches if the current token is of type 'operator' and value '!'
TokenC -> <Token('operator', '!')>
// TokenD rule matches if the current token is of type 'name'.
TokenD -> <Token('name')>
```

In this notation, we start by trying to satisfy the rule `Rule` and then go deeper into each of the sub-rules until we reach the tokens. Once we find a token and it matches what we specified, we go back up and say that the rule matched. The rules are resolved recursively until we reach the tokens.

In our explanation, we will always read top to bottom to keep things simple.

In this notation, you can also see the quantifiers. These decide how many times each rule check is applied:

- `*` - zero or more times, as many times as it can be matched
- `+` - one or more times, as many times as it can be matched
- `?` - zero or one time

We will also use parentheses `(` and `)` for grouping rules together.

With this information, we are ready to write out the grammar for `printly`:

```
// Start rule of our grammar,
// we will match this until we reach the end of our token list.
// This will essentially return us each statement per line.
LineStatement -> IfExpressionStatement | AssignmentStatement | FunctionStatement
// If expression has expression check
// and a code block containing multiple LineStatements
IfExpressionStatement -> IfKeyword PStart Expression PEnd CodeBlock
// Code block can have zero or more line statements
// in this case our grammar goes from the top for each
// line statement but we specified it as zero because
// we can have empty if statements too.
CodeBlock -> BStart LineStatement* BEnd
// Our function statement is a rule for FunctionExpression plus
// end of the line.
FunctionStatement -> FunctionExpression Eol
// We separated this rule into its own because we can use
// a function in other parts of expressions (or even in other function calls)
// so we need a separate version without the end of line token.
// Our function parameters are optional because we can have
// functions with zero parameters.
FunctionExpression -> Name PStart FunctionParameters? PEnd
// Function parameters can have one expression or multiple
// expressions separated by a comma. The comma is grouped
// inside parentheses because we only need it if we
// have more than one parameter.
FunctionParameters -> Expression (Comma Expression)*
// And finally the assignment: we need a Name token
// for the variable name, an Equals sign, an Expression,
// and an end of line to complete the statement.
AssignmentStatement -> Name Equals Expression Eol
// And here are the rules
// which resolve directly to tokens.
// Once our parser reaches these, it does not
// need to look any deeper.
Name -> <Token('name')>
Equals -> <Token('operator', '=')>
Comma -> <Token('comma')>
BStart -> <Token('codeBlockStart')>
BEnd -> <Token('codeBlockEnd')>
PStart -> <Token('parenStart')>
PEnd -> <Token('parenEnd')>
IfKeyword -> <Token('keyword', 'if')>
Eol -> <Token('endOfLine')>
```

If you read this carefully, you will notice that the rule `Expression` is missing. We omitted it on purpose and will define it in a separate part to keep things understandable.

Expression in our case means every kind of expression that can appear inside an if check, an assignment, or a function parameter. We can also have expressions inside expressions. And aside from this, we have one more problem to deal with - operator precedence. Yep, you heard right, we need to teach our language some math and logic. :)

Operator precedence allows us to evaluate some operators before others. This is the case with our arithmetic operators, where the multiplication `*` and division `/` operators need to come before addition `+` and subtraction `-`. Likewise, logic operators need to be at the bottom so that everything works as we expect.

Here is the full table of our operator precedence, from least important to most important:

| Operator | Description |
| --- | --- |
| `&&`, `\|\|` | And and Or logical operators |
| `==`, `!=` | Equality logical operators |
| `<`, `>`, `<=`, `>=` | Relational operators |
| `+`, `-` | Addition and subtraction operators |
| `*`, `/` | Multiplication and division operators |
| `!` | Not logical operator |
| `()` | Group operator |

For the purposes of our language and to keep things simple, all our operators are read from the left to right (also known as left-associative).

So with this in mind, here are the rules for `Expression`:

```
// We start with detection of && and || logical operators
// Grouping ((And | Or) EqualityTerm)* like this allows us to
// chain three or more operands like a || b && c || d
// Note that in our language these two operators have
// the same priority, but in a lot of languages the operator
// && has higher priority than ||. We are keeping things simple though.
Expression -> EqualityTerm ((And | Or) EqualityTerm)*
// This allows us to detect == and != equality expressions
// the reasoning for this rule is the same as the previous one.
EqualityTerm -> RelationTerm ((DoubleEquals | NotEquals) RelationTerm)*
// This allows us to detect >, <, >=, <= boolean expressions
RelationTerm -> AddSubTerm ((Less | Greater | LessEquals | GreaterEquals) AddSubTerm)*
// This allows us to detect + and - math expressions
AddSubTerm -> MulDivTerm ((Add | Subtract) MulDivTerm)*
// This allows us to detect * and / math expressions
MulDivTerm -> UnaryTerm ((Multiply | Divide) UnaryTerm)*
// This allows us to define ! negation boolean of a factor
UnaryTerm -> Not? Factor
// Factor allows for the left or the right side to be
// a number, string or another expression in parentheses
// If this is another expression in parentheses we go back
// up to Expression and check the rules for that expression
// as many times as we need to for our code
Factor -> GroupExpression | FunctionExpression | NumberExpression | VariableExpression | StringExpression
// Group expression allows us nest additional
// expressions and it has the highest precedence
// in expression so we can use it to clarify
// specific parts of an expression.
GroupExpression -> PStart Expression PEnd
// These last three are alias rules for
// Name, Number and String. Yes we could use
// them directly but we will need this for something else
// later in the code implementation.
VariableExpression -> Name
NumberExpression -> Number
StringExpression -> String
// And finally we need to define rules for tokens:
And -> <Token('operator', '&&')>
Or -> <Token('operator', '||')>
DoubleEquals -> <Token('operator', '==')>
NotEquals -> <Token('operator', '!=')>
Less -> <Token('operator', '<')>
Greater -> <Token('operator', '>')>
LessEquals -> <Token('operator', '<=')>
GreaterEquals -> <Token('operator', '>=')>
Add -> <Token('operator', '+')>
Subtract -> <Token('operator', '-')>
Multiply -> <Token('operator', '*')>
Divide -> <Token('operator', '/')>
Not -> <Token('operator', '!')>
String -> <Token('string')>
Number -> <Token('number')>
```
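To see how this layering of rules produces operator precedence, here is a rough sketch, in the same notation, of how an input like `1 + 2 * 3` gets matched:

```
// How 1 + 2 * 3 is matched (sketch):
// Expression, EqualityTerm and RelationTerm each match their single
// left term and zero repetitions of their operator group, so the
// match falls through to AddSubTerm.
AddSubTerm -> MulDivTerm Add MulDivTerm
// The left MulDivTerm matches just the factor 1, while the right
// MulDivTerm matches 2 Multiply 3 as one unit.
// Because * is consumed inside a single MulDivTerm, it binds
// tighter than + without any extra logic on our part.
```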

## Implementing the parser

Now that we have written the grammar, we can start implementing our parser.

How do we implement this? We could manually create functions that call other functions, starting from the `LineStatement` rule and going down all the way until we reach the tokens. But that would result in a lot of code, a lot of duplication, and a lot of bugs to deal with.

How about we try something simpler? Let's implement a way to define the rules exactly as written in the grammar. Of course, we will need some helper functions in JavaScript to make that happen, but we can do this.

Before that, we need a way to read through the tokens and keep state. For that, we will need something similar to what we used in the lexer, a `TokenReader` class :)

```
class TokenReader {
  constructor(tokens) {
    this.tokens = tokens; // store tokens for further use
    this.position = 0; // current position in the token list
    this.stateStack = []; // state stack so that we can roll back if we do not match something
  }

  // Push the current state to the stack.
  // This allows us to go back to this state
  // if we do not match anything.
  pushState() {
    this.stateStack.push(this.position);
  }

  // Restore the last pushed state.
  // We will call this when we read as far
  // as we could but didn't match what we need.
  restoreState() {
    this.position = this.popState();
  }

  // Pops the last state from the stack and returns it.
  // We will call this when we need to drop the
  // last saved state because we matched something and
  // do not need to go back.
  popState() {
    return this.stateStack.pop();
  }

  // Checks whether the current token is of a specific type.
  isType(type) {
    return this.hasNext() && this.getType() === type;
  }

  // Returns the type of the current token.
  getType() {
    return this.get().type;
  }

  // Returns the value of the current token.
  getValue() {
    return this.get().value;
  }

  // Checks whether the value of the current token matches.
  isValue(value) {
    return this.getValue() === value;
  }

  // Returns the token at the current position.
  get() {
    return this.tokens[this.position];
  }

  // Returns the very last token in the list.
  getLastToken() {
    return this.tokens[this.tokens.length - 1];
  }

  // Move to the next token.
  next() {
    this.position++;
  }

  // Check whether there are more tokens to consume.
  hasNext() {
    return this.position < this.tokens.length;
  }
}
```
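To make the save/restore mechanics concrete, here is a quick demo. It inlines a condensed copy of the class above (only the methods the demo touches) so the snippet runs on its own; the tokens are hand-written:

```javascript
// Condensed copy of the TokenReader above so this snippet is standalone.
class TokenReader {
  constructor(tokens) { this.tokens = tokens; this.position = 0; this.stateStack = []; }
  pushState() { this.stateStack.push(this.position); }
  restoreState() { this.position = this.popState(); }
  popState() { return this.stateStack.pop(); }
  getType() { return this.tokens[this.position].type; }
  next() { this.position++; }
}

const reader = new TokenReader([
  { type: 'name', value: 'a' },
  { type: 'operator', value: '=' },
]);

reader.pushState();            // remember position 0
reader.next();                 // consume the name token
console.log(reader.getType()); // → operator
reader.restoreState();         // roll back, as if the match had failed
console.log(reader.getType()); // → name
```

This is exactly what the rule helpers below do on every failed match: save the position before trying, and roll back so another rule can try from the same spot.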

Now that we have a token reader, let's implement a few functions which will allow us to define grammar rules.

First, we need to implement the function which defines a rule:
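The body of `rule` is not shown in this article (it ships in the repository's `rule-helpers` module), so here is a plausible reconstruction inferred from how `rule` is used later: it takes a lazy getter for the check and a transform applied to a successful match. The repository's version may differ in details:

```javascript
// Hypothetical sketch of `rule`, inferred from later usage.
// The check is wrapped in a getter function so that rules can
// reference other rules declared further down the file.
const rule = (getChecker, transform) => reader => {
  const match = getChecker()(reader); // resolve the check lazily, then run it
  // null means no match; otherwise we reshape the raw match into a statement.
  return match ? transform(match) : null;
};

// Tiny demo: a fake check that always "matches" a number token.
const NumberRule = rule(
  () => () => ({ type: 'number', value: '5' }),
  token => ({ type: 'number', value: Number(token.value) })
);
console.log(NumberRule(null)); // → { type: 'number', value: 5 }
```

The laziness matters: `const` declarations are not hoisted as usable values, so `() => either(A, B)` defers touching `A` and `B` until parse time, when everything is defined.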

The next one is a function that matches a set of checks exactly in that order. We will call it... `exactly` :)

```
// Exactly is a function which returns a function
// that runs each of the checks against
// a reader, in order. If just one of the checks fails,
// the whole function returns null, meaning
// that we couldn't match anything.
const exactly = (...checks) => reader => {
  // First we store the current position in the
  // token list so that we can go back if we don't
  // match what we need.
  reader.pushState();
  const results = [];

  for (const check of checks) {
    const match = check(reader); // Run the check against the reader.
    if (!match) {
      // Didn't get a match, so we
      // restore the position in the token list
      // and exit the function returning null, meaning no match.
      reader.restoreState();
      return null;
    }
    // We found a match, so we add it to our results list.
    results.push(match);
  }

  // We drop the state we saved because we found all matches
  // by this point, so we do not need it anymore.
  reader.popState();

  // We return the matched items
  // so that they can be transformed as needed.
  return results;
};
```

After `exactly`, we need the ability to check either Rule A or Rule B (the `|` sign in the grammar) and match if either one of them matches. Yep, you guessed it, we will call the function `either`:

```
// Either returns a function which
// runs against a reader, tries each
// of the passed checks and returns the first
// one that matches.
const either = (...checks) => reader => {
  for (const check of checks) {
    reader.pushState(); // Store the state so that we can go back if we do not match.

    const match = check(reader);
    if (match) {
      // We found a match here,
      // so we remove the stored state
      // as we do not need it.
      reader.popState();

      return match; // We return the found match.
    }

    // We didn't find a match here, so
    // we restore the reader to the previous state
    // so that the next check in line
    // can run from the same position.
    reader.restoreState();
  }

  // We didn't find anything at this point,
  // so we return null.
  return null;
};
```

Now we will add a way to make part of a rule optional (denoted by the `?` sign in the grammar). We will call the function `optional`:

```
// The optional function returns a function which works on a
// token reader, runs a check and returns the value
// denoted by defaultValue if the check does not match.
// Returning a defaultValue other than null allows optional
// to always return something even if the check fails,
// thus making the check optional. :)
const optional = (check, defaultValue = { type: 'optional' }) => reader => {
  reader.pushState(); // We store the state before the check.
  const result = check(reader);

  if (!result) {
    // Our check failed,
    // so we restore the previous state.
    reader.restoreState();
  } else {
    // We had a match, so we
    // don't need to keep the stored state anymore.
    reader.popState();
  }

  // Here we return the match or the default value;
  // as long as the default value is not null, this
  // makes the check optional.
  return result ? result : defaultValue;
};
```

After the `optional` check, we need to allow checks for the zero-or-more `*` and one-or-more `+` rules. We will create one function to handle both of these and call it `minOf`. We will pass it the amount we want to check (0, 1, or even more) to denote the minimum number of matches needed for the rule to pass.

```
// minOf returns a function which works on a token
// reader and performs a check up to an infinite number
// of times. If the check fails before reaching the minimum
// amount, null is returned; anything after the minimum
// is optional.
const minOf = (minAmount, check) => reader => {
  reader.pushState(); // First we store the current state.

  const results = [];
  let result = null;

  while (true) {
    // We run the check as many times
    // as we can in this loop.
    result = check(reader);

    if (!result) {
      if (results.length < minAmount) {
        // The check failed and we
        // didn't reach the minimum
        // amount, so the matching is a failure
        // and we restore the state before
        // returning null.
        reader.restoreState();
        return null;
      }

      // We didn't find a match,
      // so we do not need to stay
      // in this loop anymore and we exit it.
      break;
    }

    results.push(result);
  }

  // We reached the end here, so
  // we do not need the saved state anymore;
  // we remove it and return the results.
  reader.popState();
  return results;
};
```
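Here is a tiny standalone demo of `minOf`. It inlines the function above plus a hypothetical stub reader (just a position over an array of letters) to show the rollback behavior without pulling in the whole tokenizer:

```javascript
// minOf as defined above.
const minOf = (minAmount, check) => reader => {
  reader.pushState();
  const results = [];
  while (true) {
    const result = check(reader);
    if (!result) {
      if (results.length < minAmount) {
        reader.restoreState();
        return null;
      }
      break;
    }
    results.push(result);
  }
  reader.popState();
  return results;
};

// Stub reader for the demo: only position tracking and the state stack.
const makeReader = items => ({
  items,
  position: 0,
  stateStack: [],
  pushState() { this.stateStack.push(this.position); },
  restoreState() { this.position = this.stateStack.pop(); },
  popState() { return this.stateStack.pop(); },
});

// A check that consumes a single 'a'.
const letterA = reader =>
  reader.items[reader.position] === 'a' ? (reader.position++, 'a') : null;

console.log(minOf(2, letterA)(makeReader(['a', 'a', 'b']))); // → [ 'a', 'a' ]
console.log(minOf(2, letterA)(makeReader(['a', 'b'])));      // → null
```

Note how the second call rolls the position back to where it started, so a different rule could still try to match from the same spot.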

And finally, we need a way to define a token rule (also known as a terminal symbol in our case) in our grammar. We will define a function `token`:

```
// The token function returns a function
// which works on a token reader and
// checks the type of the token and (if specified)
// the value of the token.
const token = (type, value = null) => reader => {
  // We check the type first; if a value parameter was
  // specified we check it too. When no value was set,
  // the value check always passes. Checking the type first
  // also guarantees there is a current token to read.
  if (reader.isType(type) && (value === null || reader.isValue(value))) {
    // Our type is correct and the value matches,
    // so we return the matched token at this point.
    const result = reader.get();

    // This is also the only time we move to the
    // next token in the list, and it is because of this
    // that we need to push and pop the reader state:
    // if we do not go back on failures, we will not be
    // able to match everything correctly.
    reader.next();

    // And finally we return the token result.
    return result;
  }

  // We didn't find the token we are looking for,
  // so we return null.
  return null;
};
```
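Before wiring up the full grammar, here is a small smoke test combining `token`, `exactly` and `either`. Condensed copies of the pieces above are inlined so the snippet runs on its own, and the tokens are hand-written in the same shape the lexer produces:

```javascript
// Condensed copies of the pieces defined above, so this is standalone.
class TokenReader {
  constructor(tokens) { this.tokens = tokens; this.position = 0; this.stateStack = []; }
  pushState() { this.stateStack.push(this.position); }
  restoreState() { this.position = this.stateStack.pop(); }
  popState() { return this.stateStack.pop(); }
  hasNext() { return this.position < this.tokens.length; }
  get() { return this.tokens[this.position]; }
  isType(type) { return this.hasNext() && this.get().type === type; }
  isValue(value) { return this.get().value === value; }
  next() { this.position++; }
}

const token = (type, value = null) => reader => {
  if (reader.isType(type) && (value === null || reader.isValue(value))) {
    const result = reader.get();
    reader.next();
    return result;
  }
  return null;
};

const exactly = (...checks) => reader => {
  reader.pushState();
  const results = [];
  for (const check of checks) {
    const match = check(reader);
    if (!match) { reader.restoreState(); return null; }
    results.push(match);
  }
  reader.popState();
  return results;
};

const either = (...checks) => reader => {
  for (const check of checks) {
    reader.pushState();
    const match = check(reader);
    if (match) { reader.popState(); return match; }
    reader.restoreState();
  }
  return null;
};

// Hand-written tokens for `a = 5`.
const tokens = [
  { type: 'name', value: 'a' },
  { type: 'operator', value: '=' },
  { type: 'number', value: '5' },
];

// The rule `Name Equals (Number | Name)` matched against the tokens:
const check = exactly(
  token('name'),
  token('operator', '='),
  either(token('number'), token('name'))
);
const match = check(new TokenReader(tokens));
console.log(match.map(t => t.value)); // → [ 'a', '=', '5' ]
```

If any piece failed (say the `=` was missing), `exactly` would restore the reader and return `null`, which is exactly the signal the grammar rules use to try an alternative.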

If you want to get up to this part in the repository, check out the `part2-chapter1` branch.

## Writing out the grammar

Now we are ready to write out the grammar. We will separate the implementation into parts.

We will use destructuring assignments and arrow functions to keep this short and concise, so if you are not familiar with these parts of JavaScript, please take a moment to read up on them.

We import the functions for our rules and create the first set of rules:

```
const { rule, either, exactly, optional, minOf, token } = require('./rule-helpers');

// LineStatement -> IfExpressionStatement | AssignmentStatement | FunctionStatement
const LineStatement = rule(
  () => either(IfExpressionStatement, AssignmentStatement, FunctionStatement),
  expression => expression // We do not need to do anything with the result here.
);

// IfExpressionStatement -> IfKeyword PStart Expression PEnd CodeBlock
const IfExpressionStatement = rule(
  () => exactly(IfKeyword, PStart, Expression, PEnd, CodeBlock),
  ([,, check,, statements]) => ({ type: 'if', check, statements }) // We transform the result into an if statement.
);

// CodeBlock -> BStart LineStatement* BEnd
const CodeBlock = rule(
  () => exactly(BStart, minOf(0, LineStatement), BEnd),
  ([, statements]) => statements // We only take the statements, at index 1.
);

// FunctionStatement -> FunctionExpression Eol
const FunctionStatement = rule(
  () => exactly(FunctionExpression, Eol),
  ([expression]) => expression // We only take the result of FunctionExpression, at index 0.
);

// FunctionExpression -> Name PStart FunctionParameters? PEnd
const FunctionExpression = rule(
  () => exactly(Name, PStart, optional(FunctionParameters, []), PEnd),
  ([name, _, parameters]) => ({ // name is at index 0, parameters are at index 2
    type: 'function',
    name: name.value,
    parameters
  })
);

// FunctionParameters -> Expression (Comma Expression)*
const FunctionParameters = rule(
  () => exactly(Expression, minOf(0, exactly(Comma, Expression))),
  ([first, rest]) => [first, ...rest.map(([_, parameter]) => parameter)] // We combine the first parameter with all of the rest into one array.
);

// AssignmentStatement -> Name Equals Expression Eol
const AssignmentStatement = rule(
  () => exactly(Name, Equals, Expression, Eol),
  ([name,, expression]) => ({
    type: 'assignment',
    name: name.value, // name at index 0
    expression // expression at index 2
  })
);
```

Now for the `Expression` rule:

```
// We use this function for all binary operations in the
// Expression rule because all of them parse the same way.
// This will allow us to create nested operations.
const processBinaryResult = ([left, right]) => {
  let expression = left;

  // We need to go through all operators on the right side
  // because there can be 3 or more operands in an expression.
  for (const [operator, rightSide] of right) {
    // Each time we encounter an expression we put the
    // previous one on the left side.
    expression = {
      type: 'operation',
      operation: operator.value,
      left: expression,
      right: rightSide
    };
  }

  // Finally we return the expression structure.
  return expression;
};

// Expression -> EqualityTerm ((And | Or) EqualityTerm)*
const Expression = rule(
  () => exactly(EqualityTerm, minOf(0, exactly(either(And, Or), EqualityTerm))),
  processBinaryResult
);

// EqualityTerm -> RelationTerm ((DoubleEquals | NotEquals) RelationTerm)*
const EqualityTerm = rule(
  () => exactly(RelationTerm, minOf(0, exactly(either(DoubleEquals, NotEquals), RelationTerm))),
  processBinaryResult
);

// RelationTerm -> AddSubTerm ((Less | Greater | LessEquals | GreaterEquals) AddSubTerm)*
const RelationTerm = rule(
  () => exactly(AddSubTerm, minOf(0, exactly(either(Less, Greater, LessEquals, GreaterEquals), AddSubTerm))),
  processBinaryResult
);

// AddSubTerm -> MulDivTerm ((Add | Subtract) MulDivTerm)*
const AddSubTerm = rule(
  () => exactly(MulDivTerm, minOf(0, exactly(either(Add, Subtract), MulDivTerm))),
  processBinaryResult
);

// MulDivTerm -> UnaryTerm ((Multiply | Divide) UnaryTerm)*
const MulDivTerm = rule(
  () => exactly(UnaryTerm, minOf(0, exactly(either(Multiply, Divide), UnaryTerm))),
  processBinaryResult
);

// UnaryTerm -> Not? Factor
const UnaryTerm = rule(
  () => exactly(optional(Not), Factor),
  ([addedNot, value]) => ({
    type: 'unary',
    withNot: addedNot.type !== 'optional', // We add this field to know whether we need to invert the value or not.
    value
  })
);

// Factor -> GroupExpression | FunctionExpression | NumberExpression | VariableExpression | StringExpression
const Factor = rule(
  () => either(GroupExpression, FunctionExpression, NumberExpression, VariableExpression, StringExpression),
  factor => factor
);

// GroupExpression -> PStart Expression PEnd
const GroupExpression = rule(
  () => exactly(PStart, Expression, PEnd),
  ([, expression]) => expression
);

// VariableExpression -> Name
// Remember this part? We said we would need it. This is why:
// we need a way to structure variable names, numbers and strings,
// so we created alias rules where we structure the result tokens.
const VariableExpression = rule(
  () => Name,
  name => ({
    type: 'variable',
    name: name.value
  })
);

// NumberExpression -> Number
const NumberExpression = rule(
  () => Number,
  number => ({
    type: 'number',
    value: number.value
  })
);

// StringExpression -> String
const StringExpression = rule(
  () => String,
  string => ({
    type: 'string',
    value: string.value
  })
);
```

And finally, our token rules:

```
// Tokens
const Number = token('number');
const String = token('string');
const Name = token('name');
const Equals = token('operator', '=');
const PStart = token('parenStart');
const PEnd = token('parenEnd');
const BStart = token('codeBlockStart');
const BEnd = token('codeBlockEnd');
const Comma = token('comma');
const Eol = token('endOfLine');
const IfKeyword = token('keyword', 'if');
const And = token('operator', '&&');
const Or = token('operator', '||');
const DoubleEquals = token('operator', '==');
const NotEquals = token('operator', '!=');
const Less = token('operator', '<');
const Greater = token('operator', '>');
const LessEquals = token('operator', '<=');
const GreaterEquals = token('operator', '>=');
const Add = token('operator', '+');
const Subtract = token('operator', '-');
const Multiply = token('operator', '*');
const Divide = token('operator', '/');
const Not = token('operator', '!');
```

Now, we are ready to parse! Let's modify our `index.js` a bit to add the parsing functionality:

```
// Code which we want to parse
const code = `i = 5;`;

// Import the lexer
const analyseCode = require('./lexer-analyser');

// Run the lexer
const tokens = analyseCode(code);

// We include the grammar here.
// The grammar exports the very first rule: LineStatement.
// That means parseGrammar is actually the same as the LineStatement constant.
const parseGrammar = require('./grammar');
const TokenReader = require('./token-reader');

// Create a reader for our tokens.
const reader = new TokenReader(tokens);

const statements = [];

while (reader.hasNext()) {
  // We parse statements while there are tokens left.
  const statement = parseGrammar(reader);

  if (statement) {
    // Our statement was parsed successfully,
    // so we add it to the list of statements.
    statements.push(statement);
    continue;
  }

  // Something went wrong here, we couldn't parse the statement,
  // so our language needs to throw a syntax error.
  const token = reader.hasNext() ? reader.get() : reader.getLastToken();
  throw new Error(`Syntax error on ${token.line}:${token.character} for "${token.value}". Expected an assignment, function call or an if statement.`);
}

// Finally we output the statements we parsed.
console.dir(statements, { depth: null }); // We set depth: null so that we get a nice nested output.
```

You can also check out this part in the branch `part2-chapter2` in the repository.

So for our code `i = 5;` we will get this:

```
[
  {
    type: 'assignment',
    name: 'i',
    expression: {
      type: 'unary',
      withNot: false,
      value: { type: 'number', value: '5' }
    }
  }
]
```

Let's test an `if` statement:

```
if (a > 5) {
  a = 5;
}
```

We get:

```
[
  {
    type: 'if',
    check: {
      type: 'operation',
      operation: '>',
      left: {
        type: 'unary',
        withNot: false,
        value: { type: 'variable', name: 'a' }
      },
      right: {
        type: 'unary',
        withNot: false,
        value: { type: 'number', value: '5' }
      }
    },
    statements: [
      {
        type: 'assignment',
        name: 'a',
        expression: {
          type: 'unary',
          withNot: false,
          value: { type: 'number', value: '5' }
        }
      }
    ]
  }
]
```

And for a function call `print('hello' + ' ' + 'world');` we get:

```
[
  {
    type: 'function',
    name: 'print',
    parameters: [
      {
        type: 'operation',
        operation: '+',
        left: {
          type: 'operation',
          operation: '+',
          left: {
            type: 'unary',
            withNot: false,
            value: { type: 'string', value: 'hello' }
          },
          right: {
            type: 'unary',
            withNot: false,
            value: { type: 'string', value: ' ' }
          }
        },
        right: {
          type: 'unary',
          withNot: false,
          value: { type: 'string', value: 'world' }
        }
      }
    ]
  }
]
```

As you can see, our grammar also transformed and nicely structured our statements, making them easy to interpret and work with. You can play around with if statements and function calls to get different outputs.

In the next part, we will cover writing an interpreter, which will finally let us instruct our language to do some actual work.

See ya until then! :)
