
Compilers

A compiler is a program that reads a program in one language, the source language, and translates it into an equivalent program in another language, the target language. The translation process should also report the presence of errors in the source program.

    Source Program --> [ Compiler ] --> Target Program
                            |
                      Error Messages

There are two parts to compilation. The analysis part breaks up the source program into its constituent pieces and creates an intermediate representation of the source program. The synthesis part constructs the desired target program from the intermediate representation.

Phases of Compiler
The compiler has a number of phases, plus a symbol table manager and an error handler.

    Input: Source Program
              |
       Lexical Analyzer
              |
       Syntax Analyzer
              |
      Semantic Analyzer           (The Symbol Table Manager and the
              |                    Error Handler interact with all phases.)
    Intermediate Code Generator
              |
        Code Optimizer
              |
        Code Generator
              |
    Output: Target Program

The cousins of the compiler are the preprocessor, the assembler, and the loader and link-editor.

Front End vs Back End of a Compiler
The phases of a compiler are collected into a front end and a back end. The front end includes all analysis phases and the intermediate code generator. The back end includes the code optimization phase and the final code generation phase. The front end analyzes the source program and produces intermediate code, while the back end synthesizes the target program from that intermediate code. A naive (brute-force) front end might run the phases serially: the lexical analyzer takes the source program as input and produces a long string of

tokens; the syntax analyzer takes the output of the lexical analyzer and produces a large tree; the semantic analyzer takes the output of the syntax analyzer and produces another tree; similarly, the intermediate code generator takes the tree produced by the semantic analyzer as input and produces intermediate code. Minus points: this requires an enormous amount of space to store the tokens and trees, and it is very slow, since each phase would have to read and write its input and output to and from temporary disk storage.

Remedy
Use syntax-directed translation to interleave the actions of the phases.

Compiler Construction Tools

Parser Generators: the input specification is based on a context-free grammar.
Scanner Generators: the input specification is based on regular expressions; the organization is based on finite automata.
Syntax-Directed Translation Engines: walk the parse tree and, as a result, generate intermediate code.
Automatic Code Generators: translate intermediate language into machine language.
Data-Flow Engines: perform code optimization using data-flow analysis.

Syntax Definition
A context-free grammar, CFG (synonym: Backus-Naur Form, or BNF), is a common notation for specifying the syntax of a language. For example, an "IF-ELSE" statement in the C language has the form

    IF (Expr) stmt ELSE stmt

In other words, it is the concatenation of:

    the keyword IF;
    an opening parenthesis ( ;
    an expression Expr;
    a closing parenthesis ) ;
    a statement stmt;
    the keyword ELSE;
    finally, another statement stmt.

The syntax of an 'IF-ELSE' statement can be specified by the following 'production rule' in the CFG:

    stmt -> IF (Expr) stmt ELSE stmt

The arrow (->) is read as "can have the form". A context-free grammar (CFG) has four components:

    a set of tokens called terminals;
    a set of variables called nonterminals;
    a set of production rules;
    a designation of one of the nonterminals as the start symbol.

Multiple productions with the same nonterminal on the left, like

    list -> list + digit
    list -> list - digit
    list -> digit

may be grouped together, separated by vertical bars:

    list -> list + digit | list - digit | digit

Ambiguity
A grammar is ambiguous if two or more different parse trees can derive the same token string. Equivalently, an ambiguous grammar allows two different derivations for a token string. A grammar for a compiler should be unambiguous, since different parse trees would give a token string different meanings. Consider the following grammar:

    string -> string + string | string - string | 0 | 1 | 2 | . . . | 9

To show that a grammar is ambiguous, all we need is to find a "single" string that has more than one parse tree. (Figure, pg. 31.) The figure shows two different parse trees for the token string 9 - 5 + 2 that correspond

to two different ways of parenthesizing the expression: (9 - 5) + 2 and 9 - (5 + 2). The first parenthesization evaluates to 6, while the second evaluates to 2. Perhaps the most famous example of ambiguity in a programming language is the dangling 'ELSE'. Consider the grammar G with the productions:

    S -> IF b THEN S ELSE S | IF b THEN S | a

G is ambiguous, since the sentence IF b THEN IF b THEN a ELSE a has two different parse trees (derivation trees).

(Parse tree I: figure.) This parse tree imposes the interpretation IF b THEN (IF b THEN a) ELSE a.

(Parse tree II: figure.) This parse tree imposes the interpretation IF b THEN (IF b THEN a ELSE a).

The reason the grammar G is ambiguous is that an 'ELSE' can be associated with two different THENs. For this reason, programming languages which allow both IF-THEN-ELSE and IF-THEN constructs can be ambiguous.

Associativity of Operators
If an operand has operators on both sides then, by convention, the operand is associated with the operator on the left. In most programming languages, arithmetic operators like addition, subtraction, multiplication, and division are left-associative.

Token string: 9 - 5 + 2
Production rules:

    list -> list + digit | list - digit | digit
    digit -> 0 | 1 | 2 | . . . | 9

Parse tree for left-associative operator is

(Figure 2.4 on pg. 31.) In the C programming language the assignment operator, =, is right-associative; that is, the token string a = b = c should be treated as a = (b = c).

Token string: a = b = c
Production rules:

    right -> letter = right | letter
    letter -> a | b | . . . | z

The parse tree for a right-associative operator is:

Figure

Precedence of Operators
An expression 9 + 5 * 2 has two possible interpretations: (9 + 5) * 2 and 9 + (5 * 2). The associativity of '+' and '*' does not resolve this ambiguity. For this reason, we need to know the relative precedence of operators. The convention is to give multiplication and division higher precedence than addition and subtraction. Only when we have operators of equal precedence do we apply the rules of associativity. So, in the example expression 9 + 5 * 2, we perform the operation of higher precedence, i.e., *, before operations of lower precedence, i.e., +. Therefore, the correct interpretation is 9 + (5 * 2).

Separate Rule
Consider the following grammar and language again:

    S -> IF b THEN S ELSE S | IF b THEN S | a

The ambiguity can be removed if we arbitrarily decide that an ELSE should be attached to the last preceding THEN, like: (figure). We can revise the grammar to have two nonterminals, S1 and S2. We insist that S2 generates IF-THEN-ELSE, while S1 is free to generate either kind of statement. The rules of the new grammar are:

    S1 -> IF b THEN S1 | IF b THEN S2 ELSE S1 | a
    S2 -> IF b THEN S2 ELSE S2 | a

Although there is no general algorithm that can be used to determine whether a given grammar is ambiguous, it is certainly possible to isolate rules which lead to ambiguity. A grammar containing the production

    A -> AA | α

is ambiguous, because the string AAA has more than one parse tree. (Figure.) This ambiguity disappears if we use the productions

    A -> AB | B,  B -> α        or        A -> BA | B,  B -> α

Syntax of Expressions
A grammar for arithmetic expressions looks like:

    expr -> expr + term | expr - term | term
    term -> term * factor | term / factor | factor
    factor -> id | num | (expr)

That is, an expr is a string of terms separated by '+' and '-', a term is a string of factors separated by '*' and '/', and a factor is a single operand or an expression wrapped inside parentheses.
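To make the precedence and associativity encoded in this grammar concrete, here is a minimal recursive-descent evaluator sketch in C (an illustration added here, not part of the original notes). It assumes single-digit operands, a global cursor p with hypothetical helpers peek and next, and it implements the left-recursive productions as loops (see Left Recursion later in these notes):

    #include <ctype.h>
    #include <stdio.h>

    static const char *p;                  /* cursor into the input         */
    static int peek(void) { return *p; }
    static int next(void) { return *p++; }

    static int expr(void);                 /* forward declaration           */

    /* factor -> digit | ( expr ) */
    static int factor(void) {
        if (peek() == '(') { next(); int v = expr(); next(); return v; }
        return next() - '0';               /* single-digit operand          */
    }

    /* term -> term * factor | term / factor | factor
       The left recursion is realized as a loop, so * and / associate left. */
    static int term(void) {
        int v = factor();
        while (peek() == '*' || peek() == '/')
            v = (next() == '*') ? v * factor() : v / factor();
        return v;
    }

    /* expr -> expr + term | expr - term | term */
    static int expr(void) {
        int v = term();
        while (peek() == '+' || peek() == '-')
            v = (next() == '+') ? v + term() : v - term();
        return v;
    }

    int main(void) {
        p = "9+5*2";
        printf("%d\n", expr());            /* prints 19, i.e., 9 + (5 * 2)  */
        return 0;
    }

Because factor is reached only through term, and term only through expr, '*' binds tighter than '+' without any explicit precedence table.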

Syntax-Directed Translation
Modern compilers use syntax-directed translation to interleave the actions of the compiler phases. The syntax analyzer directs the whole process during the parsing of the source code. It:

    calls the lexical analyzer whenever the syntax analyzer wants another token;
    performs the actions of the semantic analyzer;
    performs the actions of the intermediate code generator.

The actions of the semantic analyzer and the intermediate code generator require the

passage of information up and/or down the parse tree. We think of this information as attributes attached to the nodes of the parse tree, with the parser moving this information between parent and child nodes as it applies the productions of the grammar.

Postfix Notation
Postfix notation, also called reverse Polish notation or RPN, places each binary arithmetic operator after its two operands instead of between them.

Infix expression: (9 - 5) + 2 = (9 5 -) + 2 = (9 5 -) 2 + = 9 5 - 2 +  : postfix notation

Infix expression: 9 - (5 + 2) = 9 - (5 2 +) = 9 (5 2 +) - = 9 5 2 + -  : postfix notation

Why postfix notation? There are two reasons: there is only one interpretation, and we do not need parentheses to disambiguate the grammar.
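Both points can be seen in how simply postfix is evaluated: a single left-to-right scan with a stack, with no need to consult parentheses or precedence. The following is a minimal sketch (added here, not from the original notes), assuming single-digit operands:

    #include <stdio.h>

    /* Evaluate a postfix string of single-digit operands with one
       left-to-right scan and an explicit stack. */
    int eval_postfix(const char *s) {
        int stack[64], top = 0;
        for (; *s; s++) {
            if (*s >= '0' && *s <= '9') {
                stack[top++] = *s - '0';          /* push operand      */
            } else {
                int b = stack[--top];             /* right operand     */
                int a = stack[--top];             /* left operand      */
                stack[top++] = (*s == '+') ? a + b
                             : (*s == '-') ? a - b
                             : (*s == '*') ? a * b : a / b;
            }
        }
        return stack[0];
    }

    int main(void) {
        printf("%d\n", eval_postfix("95-2+"));    /* (9 - 5) + 2 = 6   */
        printf("%d\n", eval_postfix("952+-"));    /* 9 - (5 + 2) = 2   */
        return 0;
    }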

Syntax-Directed Definitions

A syntax-directed definition uses a CFG to specify the syntactic structure of the input, associates a set of attributes with each grammar symbol, and associates a set of semantic rules with each production rule. For example, let the grammar contain the production:

    X -> YZ

and let nodes X, Y and Z have associated attributes X.a, Y.a and Z.a respectively. (Diagram: annotated parse tree.) If the semantic rule {X.a := Y.a + Z.a} is associated with the production X -> YZ, then the parser should add attribute 'a' of node Y and attribute 'a' of node Z together and set attribute 'a' of node X to their sum.

Synthesized Attributes
An attribute is synthesized if its value at a parent node can be determined from the attributes of its children. (Diagram.) Since in this example the value at node X can be determined from attribute 'a' of the Y and Z nodes, attribute 'a' is a synthesized attribute. Synthesized attributes can be evaluated by a single bottom-up traversal of the parse tree.

Example 2.6: The following shows the syntax-directed definition of an infix-to-postfix translator (Figure 2.5, pg. 34), where || denotes string concatenation:

    PRODUCTION                SEMANTIC RULE
    expr -> expr1 + term      expr.t := expr1.t || term.t || '+'
    expr -> expr1 - term      expr.t := expr1.t || term.t || '-'
    expr -> term              expr.t := term.t
    term -> 0                 term.t := '0'
    term -> 1                 term.t := '1'
      :                         :
    term -> 9                 term.t := '9'
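To make the first two semantic rules concrete, here is a tiny C sketch (added for illustration, not in the original notes) that computes the synthesized attribute t for 9 - 5 + 2 bottom-up, representing attributes as strings:

    #include <stdio.h>

    /* expr -> expr1 op term  with rule  expr.t := expr1.t || term.t || op */
    static void apply_rule(char *out, const char *e1, const char *t, char op) {
        sprintf(out, "%s%s%c", e1, t, op);
    }

    int main(void) {
        char t1[16], t2[16];
        /* The parse tree for 9 - 5 + 2 is left-associative: ((9 - 5) + 2). */
        apply_rule(t1, "9", "5", '-');   /* inner expr.t = "95-"            */
        apply_rule(t2, t1,  "2", '+');   /* root  expr.t = "95-2+"          */
        printf("%s\n", t2);              /* prints 95-2+                    */
        return 0;
    }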

The parse tree corresponding to the productions:

(Diagram.) The annotated parse tree corresponding to the semantic rules: (diagram). The above annotated parse tree shows how the input infix expression 9 - 5 + 2 is translated to the postfix expression 95-2+ at the root.

Depth-First Traversals
A depth-first traversal of a parse tree is one way of evaluating attributes. Note that a syntax-directed definition does not impose any particular order, as long as the order computes the attributes of a parent after all of its children's attributes.

    PROCEDURE visit (n: node)
    BEGIN
        FOR each child m of n, from left to right DO
            visit(m);
        Evaluate semantic rules at node n
    END

(Diagram.)

Translation Schemes
A translation scheme is another way of specifying a syntax-directed translation. The scheme is a CFG in which program fragments called semantic actions are embedded within the right sides of productions. For example,

    rest -> + term {print('+')} rest1

indicates that a '+' sign should be printed between the depth-first traversal of the term node and the depth-first traversal of the rest1 node.

Ex. 2.8 REVISION: SYNTAX-DIRECTED TRANSLATION
Step 1: Syntax-directed definition for translating an infix expression to postfix form:

    PRODUCTION                SEMANTIC RULE
    expr -> expr1 + term      expr.t := expr1.t || term.t || '+'
    expr -> expr1 - term      expr.t := expr1.t || term.t || '-'
    expr -> term              expr.t := term.t
    term -> 0                 term.t := '0'
    term -> 1                 term.t := '1'
      :                         :
    term -> 9                 term.t := '9'

(Diagram: parse tree with the embedded actions.)

Step 2: A translation scheme derived from the syntax-directed definition (Figure 2.15 on pg. 39):

    expr -> expr + term {print('+')}
    expr -> expr - term {print('-')}
    expr -> term
    term -> 0 {print('0')}
    term -> 1 {print('1')}
      :
    term -> 9 {print('9')}

Step 3: A parse tree with actions, translating 9 - 5 + 2 into 95-2+. (Figure 2.14 on pg. 40.) Note that it is not necessary to actually construct the parse tree.
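The following is a hedged sketch (added here, not from the notes) of the Step 2 translation scheme as a C program: each semantic action becomes a print statement executed at the right moment during parsing, the left recursion in expr is realized as a loop, and no parse tree is ever built:

    #include <stdio.h>

    static const char *p;                /* cursor into the input string     */

    /* term -> 0 {print('0')} | 1 {print('1')} | ... | 9 {print('9')} */
    static void term(void) {
        putchar(*p++);                   /* embedded action: emit the digit  */
    }

    /* expr -> expr + term {print('+')} | expr - term {print('-')} | term */
    static void expr(void) {
        term();
        while (*p == '+' || *p == '-') {
            char op = *p++;
            term();
            putchar(op);                 /* the action runs after the term   */
        }
    }

    int main(void) {
        p = "9-5+2";
        expr();                          /* prints 95-2+                     */
        putchar('\n');
        return 0;
    }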

Parsing
Parsing is the process of finding a parse tree for a string of tokens. Equivalently, it is the process of determining whether a string of tokens can be generated by a grammar. The worst-case time of general parsing algorithms is O(n^3), but the typical case is O(n) time. For example, the production rules of grammar G are:

    list -> list + digit | list - digit | digit
    digit -> 0 | 1 | . . . | 9

The given token string is 9-5+2.

The parse tree is: (diagram). Each node in the parse tree is labeled by a grammar symbol. An interior node corresponds to the left side of a production, and the children of the interior node correspond to the right side of that production. The language defined by a grammar is the set of all token strings that can be derived from its start symbol. The language defined by the grammar

    list -> list + digit | list - digit | digit
    digit -> 0 | 1 | 2 | . . . | 9

contains all lists of digits separated by plus and minus signs. The epsilon, ε, on the right side of a production denotes the empty string. As we have mentioned above, parsing is the process of determining whether a string of tokens can be generated by a grammar. A parser must be capable of constructing the tree, or else the translation cannot be guaranteed correct. For any language that can be described by a CFG, parsing requires O(n^3) time to parse a string of n tokens. However, most programming languages are so simple that a parser requires just O(n) time with a single left-to-right scan over the input string of n tokens. There are two types of parsing:

Top-down parsing (start from the start symbol and derive the string). A top-down parser builds a parse tree by starting at the root and working down towards the leaves. It is easy to generate by hand. Examples: recursive-descent, predictive.

Bottom-up parsing (start from the string and reduce to the start symbol). A bottom-up parser builds a parse tree by starting at the leaves and working up towards the root. It is not easy to build by hand; usually, compiler-generator software produces bottom-up parsers, but they handle a larger class of grammars. Example: the LR parser.

Top-Down Parsing
Consider the CFG with productions:

    expr -> term rest
    rest -> + term rest | - term rest | ε
    term -> 0 | 1 | . . . | 9

Step 0: Initialization: the root must be the starting symbol.
Step 1: expr -> term rest
Step 2: term -> 9
Step 3: rest -> - term rest
Step 4: term -> 5
Step 5: rest -> + term rest
Step 6: term -> 2
Step 7: rest -> ε

In the example above, the grammar made it easy for the top-down parser to pick the correct production at each step. This is not true in general; see the example of the dangling 'ELSE'.

Predictive Parsing
Recursive-descent parsing is a top-down method of syntax analysis that executes a set of recursive procedures to process the input. A procedure is associated with each nonterminal of the grammar. Predictive parsing is a special form of recursive-descent parsing in which the current input token unambiguously determines the production to be applied at each step. Let the grammar be:

    expr -> term rest
    rest -> + term rest | - term rest | ε
    term -> 0 | 1 | . . . | 9

In recursive-descent parsing, we write code for each nonterminal of the grammar. In the case of the above grammar, we should have three procedures, corresponding to the nonterminals expr, rest, and term. Since there is only one production for the nonterminal expr, the procedure expr is:

    void expr(void)
    {
        term(); rest(); return;
    }

Since there are three productions for rest, the procedure rest uses a global variable, 'lookahead', to select the correct production, or simply selects "no action", i.e., the ε-production, indicating that the lookahead variable is neither + nor -.

    void rest(void)
    {
        if (lookahead == '+') {
            match('+'); term(); rest(); return;
        } else if (lookahead == '-') {
            match('-'); term(); rest(); return;
        } else {
            return;    /* the epsilon-production: no action */
        }
    }

The procedure term checks whether the global variable lookahead is a digit:

    void term(void)
    {
        if (isdigit(lookahead)) {
            match(lookahead); return;
        } else {
            ReportError();
        }
    }

After loading the first input token into the variable 'lookahead', the predictive parser is started by calling the starting symbol, expr. If the input is error-free, the parser conducts a depth-first traversal of the parse tree and returns to the caller routine through expr.

Problem with predictive parsing: left recursion.
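For completeness, here is a hedged sketch (not in the original notes) of the supporting routines the text assumes: match advances lookahead when the current token is the expected one, ReportError aborts, and main loads the first token before calling the start symbol:

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    int lookahead;                        /* the current input token          */

    void expr(void);                      /* the three procedures shown above */
    void rest(void);
    void term(void);

    void ReportError(void) {
        fprintf(stderr, "syntax error\n");
        exit(1);
    }

    /* Advance to the next input token if the current one is as expected. */
    void match(int t) {
        if (lookahead == t) lookahead = getchar();
        else ReportError();
    }

    int main(void) {
        lookahead = getchar();            /* load the first input token       */
        expr();                           /* start the predictive parser      */
        return 0;
    }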

Left Recursion

A production is left-recursive if the leftmost symbol on the right side is the same as the nonterminal on the left side, for example, expr -> expr + term. If one were to code this production directly in a recursive-descent parser, the parser would go into an infinite loop. (Diagram.) We can eliminate left recursion by introducing new nonterminals and new production rules. For example, the left-recursive grammar

    E -> E + T | T
    T -> T * F | F
    F -> (E) | id

can be redefined without left recursion as:

    E  -> T E'
    E' -> + T E' | ε
    T  -> F T'
    T' -> * F T' | ε
    F  -> (E) | id

Getting rid of such immediate left recursion is not enough. One must get rid of indirect left recursion too, where two or more nonterminals are mutually left-recursive.
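In general (a standard transformation, stated here for reference), a left-recursive pair A -> Aα | β becomes A -> βA', A' -> αA' | ε. In a recursive-descent parser the new right-recursive A' is naturally coded as a loop; here is a minimal C sketch for E -> TE', E' -> +TE' | ε, reusing match and lookahead from the predictive-parser sketch above:

    /* E -> T E'      E' -> + T E' | epsilon
       The tail nonterminal E' becomes a while-loop, so the parser consumes
       input on every iteration and can no longer recurse forever.          */
    void E(void) {
        T();
        while (lookahead == '+') {    /* E' -> + T E'                        */
            match('+');
            T();
        }                             /* E' -> epsilon: simply fall through  */
    }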

Lexical Analyzer
The main task of the lexical analyzer is to read a stream of characters as input and produce a sequence of tokens, such as names, keywords, and punctuation marks, for the syntax analyzer. It discards the white space and comments between the tokens and also keeps track of line numbers. <fig: 3.1 pp. 84> The topics covered below are:

    Tokens, Patterns, Lexemes
    Specification of Tokens
        Regular Expressions
        Notational Shorthand
    Finite Automata
        Nondeterministic Finite Automata (NFA)
        Deterministic Finite Automata (DFA)
        Conversion of an NFA into a DFA
        From a Regular Expression to an NFA

Tokens, Patterns, Lexemes

Token
A lexical token is a sequence of characters that can be treated as a unit in the grammar of a programming language. Examples of tokens:

    type tokens (id, num, real, . . . )
    punctuation tokens (IF, void, return, . . . )
    alphabetic tokens (keywords)

Examples of non-tokens:

    comments, preprocessor directives, macros, blanks, tabs, newlines, . . .

Patterns
There is a set of strings in the input for which the same token is produced as output. This set of strings is described by a rule called a pattern associated with the token. Regular expressions are an important notation for specifying patterns. For example, the pattern for the Pascal identifier token, id, is:

    id -> letter (letter | digit)*
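As an illustration (a sketch added here, not from the notes), the pattern letter (letter | digit)* translates directly into a character-scanning loop in C:

    #include <ctype.h>

    /* Returns the length of the identifier at the start of s, or 0 if s
       does not begin with one:  letter (letter | digit)*                 */
    int match_id(const char *s) {
        int n;
        if (!isalpha((unsigned char)s[0])) return 0;     /* letter          */
        for (n = 1; isalnum((unsigned char)s[n]); n++)   /* (letter|digit)* */
            ;
        return n;
    }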

Lexeme
A lexeme is a sequence of characters in the source program that is matched by the pattern for a token. For example, the pattern for the RELOP token contains six lexemes (=, <>, <, <=, >, >=), so the lexical analyzer should return a RELOP token to the parser whenever it sees any one of the six.

3.3 Specification of Tokens


An alphabet or a character class is a finite set of symbols. Typical examples of symbols are letters and characters. The set {0, 1} is the binary alphabet. ASCII and EBCDIC are two examples of computer alphabets.

Strings
A string over some alphabet is a finite sequence of symbols taken from that alphabet. For example, banana is a sequence of six symbols (i.e., a string of length six) taken from the ASCII computer alphabet. The empty string, denoted by ε, satisfies Sε = εS = S. String exponentiation concatenates a string with itself a given number of times:

    S^2 = SS
    S^3 = SSS
    S^4 = SSSS

and so on.

By definition, S^0 is the empty string ε, and L^1 = L. The Kleene closure of a language L, denoted by L*, is "zero or more concatenations of" L:

    L* = L^0 ∪ L^1 ∪ L^2 ∪ L^3 . . . ∪ L^n . . .

For example, if L = {a, b}, then L* = {ε, a, b, aa, ab, ba, bb, aaa, . . . }.

Regular Definitions
The regular definition for a Pascal unsigned number is:

    digit -> 0 | 1 | 2 | . . . | 9
    digits -> digit digit*
    optional-fraction -> . digits | ε
    optional-exponent -> (E (+ | - | ε) digits) | ε
    num -> digits optional-fraction optional-exponent

This regular definition says that:

    An optional-fraction is either a decimal point followed by one or more digits, or it is missing (i.e., an empty string).
    An optional-exponent is either an empty string or the letter E followed by an optional + or - sign, followed by one or more digits.

Notational Shorthand
The unary postfix operator + means "one or more instances of": (r)+ = rr*. The unary postfix operator ? means "zero or one instance of": r? = (r | ε). Using this shorthand notation, the Pascal unsigned number token can be written as:

    digit -> 0 | 1 | 2 | . . . | 9
    digits -> digit+
    optional-fraction -> (. digits)?
    optional-exponent -> (E (+ | -)? digits)?
    num -> digits optional-fraction optional-exponent
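A hand-coded recognizer makes the + and ? operators concrete. This is a hedged sketch (not in the notes): each digit+ becomes a required-then-looping scan, and each ?-part is attempted only if its first character is present:

    #include <ctype.h>

    /* Returns 1 iff s matches  digit+ (. digits)? (E (+|-)? digits)?  */
    int is_unsigned_num(const char *s) {
        if (!isdigit((unsigned char)*s)) return 0;     /* digits: digit+     */
        while (isdigit((unsigned char)*s)) s++;
        if (*s == '.') {                               /* (. digits)?        */
            s++;
            if (!isdigit((unsigned char)*s)) return 0;
            while (isdigit((unsigned char)*s)) s++;
        }
        if (*s == 'E') {                               /* (E (+|-)? digits)? */
            s++;
            if (*s == '+' || *s == '-') s++;
            if (!isdigit((unsigned char)*s)) return 0;
            while (isdigit((unsigned char)*s)) s++;
        }
        return *s == '\0';                             /* entire string used */
    }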

Finite Automata
A recognizer for a language is a program that takes a string x as input and answers "yes" if x is a sentence of the language and "no" otherwise. One can compile any regular expression into a recognizer by constructing a generalized transition diagram called a finite automaton. A finite automaton can be deterministic or nondeterministic, where nondeterministic means that more than one transition out of a state may be possible on the same input symbol. Both kinds of automata are capable of recognizing exactly what regular expressions can denote.

Nondeterministic Finite Automata (NFA)


A nondeterministic finite automaton is a mathematical model that consists of:

    a set of states S;
    a set of input symbols, Σ, called the input symbol alphabet;
    a transition function, move, that maps state-symbol pairs to sets of states;
    a state s0 called the initial or start state;
    a set of states F called the accepting or final states.

An NFA can be described by a transition graph (labeled graph) where the nodes are states and the edges show the transition function. The label on each edge is either a symbol in the alphabet Σ or ε (ε's disappear in a concatenation). (FIGURE 3.21 pp. 116.) The transition table is: (table).

Deterministic Finite Automata (DFA)


A deterministic finite automaton (DFA) is a special case of a nondeterministic finite automaton (NFA) in which no state has an ε-transition, and for each state s and input symbol a there is at most one edge labeled a leaving s.

From a Regular Expression to an NFA (Thompson's Construction)
The construction works case by case on the structure of the regular expression. The regular expression ε denotes {ε}, and for each symbol a of the alphabet, the regular expression a denotes {a}. (Diagram.) This NFA recognizes {a}. Suppose s and t are regular expressions denoting L(s) and L(t) respectively; then:

    s | t is a regular expression denoting L(s) ∪ L(t)  (diagram)
    st is a regular expression denoting L(s)L(t)  (diagram)
    s* is a regular expression denoting (L(s))*  (diagram)
    (s) is a regular expression denoting L(s), and can be used for putting parentheses around a regular expression.

Example: Use the above algorithm, Thompson's construction, to construct an NFA for the regular expression r = (a|b)* abb. First construct the parse tree for r = (a|b)* abb. (Figures.) For r2, use case 2. For r1, use case 2.

(Figures.) For r3, use case 3a; for r5, use case 3c. We have r5 = (a|b)*. (Figures.) For r6, use case 2, and for r7, use case 3b. We get r7 = (a|b)* a. Similarly, for r8 and r10, use case 2. (Figures.) And we get r11 by case 3b. (Figure.) We have r = (a|b)* abb.
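After converting the NFA to a DFA (subset construction), the recognizer for (a|b)* abb has four states and can be driven by a simple table. This is a sketch added for illustration; the state numbering is one conventional choice, not taken from the notes:

    #include <stdio.h>

    /* DFA for (a|b)*abb: state 3 is accepting.
       Row = current state; column 0 = input 'a', column 1 = input 'b'.  */
    static const int delta[4][2] = {
        {1, 0},   /* state 0: no useful suffix seen yet                  */
        {1, 2},   /* state 1: suffix ends in  a                          */
        {1, 3},   /* state 2: suffix ends in  ab                         */
        {1, 0},   /* state 3: suffix ends in  abb (accept)               */
    };

    int accepts(const char *s) {
        int state = 0;
        for (; *s; s++) {
            if (*s != 'a' && *s != 'b') return 0;   /* not in alphabet   */
            state = delta[state][*s == 'b'];
        }
        return state == 3;
    }

    int main(void) {
        printf("%d\n", accepts("babb"));   /* 1: matches (a|b)*abb       */
        printf("%d\n", accepts("abab"));   /* 0: does not match          */
        return 0;
    }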

Code Generation
Introduction
Phases of a typical compiler and the position of code generation: <fig: 9.1 - page 513>. Since code generation is an undecidable problem (mathematically speaking), we must be content with heuristic techniques that generate "good" code (not necessarily optimal code). Code generation must do the following things:

    produce correct code;
    make effective use of the machine architecture;
    run efficiently.

Issues in the Design of a Code Generator


The code generator is concerned with:

    1. Memory management
    2. Instruction selection
    3. Register utilization (allocation)
    4. Evaluation order

1. Memory Management
Mapping names in the source program to addresses of data objects is done cooperatively by pass 1 (the front end) and pass 2 (the code generator). Quadruples map to address instructions. Local variables (local to functions or procedures) are stack-allocated in the activation record, while global variables are kept in a static area.

2. Instruction Selection
The nature of the instruction set of the target machine determines selection. Selection is "easy" if the instruction set is regular, that is, uniform and complete. Uniform: all triple addresses, all stack single addresses. Complete: any register can be used for any operation. If we don't care about the efficiency of the target program, instruction selection is straightforward. For example, suppose the three-address code is:

    a := b + c
    d := a + e

Inefficient assembly code for it is:

    MOV b, R0     ; R0 <- b
    ADD c, R0     ; R0 <- c + R0
    MOV R0, a     ; a  <- R0
    MOV a, R0     ; R0 <- a
    ADD e, R0     ; R0 <- e + R0
    MOV R0, d     ; d  <- R0

Here the fourth statement is redundant, and so is the third statement if 'a' is not subsequently used.

3. Register Allocation
Registers can be accessed faster than memory words. Frequently accessed variables should reside in registers (register allocation). Register assignment is picking a specific register for each such variable. Formally, there are two steps in register allocation:

    Register allocation (which registers?): a selection process in which we select the set of variables that will reside in registers.
    Register assignment (which variable?): here we pick the specific register that will contain each variable.

Note that this is an NP-complete problem. Some of the issues that complicate register allocation:

1. Special uses of hardware: for example, some instructions require specific registers.
2. Software conventions: for example, register R6 (say) always holds the return address, and register R5 (say) serves as the stack pointer; similarly, registers are assigned for branch and link, frames, heaps, etc.

4. Choice of Evaluation Order
Changing the order of evaluation may produce more efficient code. Finding the best order is an NP-complete problem, but we can bypass this hindrance by generating code for quadruples in the order in which they were produced by the intermediate code generator. For example, reordering

    ADD x, y, T1
    ADD a, b, T2

is legal because x, y and a, b are different (not dependent).

The Target Machine


Familiarity with the target machine and its instruction set is a prerequisite for designing a good code generator.

One typical architecture: the target machine is byte-addressable (factor of 4) with 4 bytes per word, has 16 to 32 (or n) general-purpose registers, and uses two-address instructions of the form

    op source, destination        e.g., MOV A, B; ADD A, D

Another typical architecture: the target machine is bit-addressable (factor of 1), has word-oriented general-purpose registers, and uses three-address instructions of the form

    op source1, source2, destination        e.g., ADD A, B, C

In what follows, we assume a byte-addressable memory with 4 bytes per word and n general-purpose registers, R0, R1, . . . , Rn-1. Each integer requires 2 bytes (16 bits). Instructions are two-address, of the form: mnemonic source, destination.

    MODE                FORM     ADDRESS                      EXAMPLE             ADDED COST
    absolute            M        M                            ADD temp, R1        1
    register            R        R                            ADD R0, R1          0
    indexed             c(R)     c + contents(R)              ADD 100(R2), R1     1
    indirect register   *R       contents(R)                  ADD *R2, R1         0
    indirect indexed    *c(R)    contents(c + contents(R))    ADD *100(R2), R1    1
    literal             #c       the constant c               ADD #3, R1          1

Instruction costs: each instruction has a cost of 1 plus the added costs for the source and destination, i.e., cost of instruction = 1 + costs associated with the source and destination address modes. This cost corresponds to the length (in words) of the instruction. Examples:

    Move register to memory:   MOV R0, M        cost = 1 + 1 = 2
    Indirect indexed mode:     MOV *4(R0), M    cost = 1 + 1 + 1 = 3
    Indexed mode:              MOV 4(R0), M     cost = 1 + 1 + 1 = 3
    Literal mode:              MOV #1, R0       cost = 1 + 1 = 2
    Move memory to memory:     MOV M, M         cost = 1 + 1 + 1 = 3
