
regarding intermediate representation in system programming?

Mandar Khire
Ranch Hand
Posts: 575
I am studying the fundamental components of language processing.
I searched for this on Google, but I didn't find a proper answer. Can anybody help me?
I read several PDF books from the link above that show what an intermediate representation is, but I didn't understand its advantages.
Is it created by the compiler?
In a toy compiler, the IR phase produces two components: 1. a table of information, and 2. the intermediate code itself.
A desirable property of an IR is that it should be easy to construct and analyze.
How is that possible for the user?

I found a diagram (in a system programming book) that shows the source program going into the IR, and the IR sitting between two parts: a front end and a back end. The source program passes through both of them and the target program comes out.
What I don't understand is how this relates to Java: if I write a small Java program (1+1) in NetBeans, it goes to the JVM, many processes occur inside the JVM, and I get the output 2.
How does this example compare with that diagram?
I think I am totally confused!
Ernest Friedman-Hill
author and iconoclast
Posts: 24217
I don't think you mean "system programming" here; perhaps you mean "programming systems", which means something rather different. "system programming" implies low-level programming, like writing part of an OS. You're really just asking about compiler design, I think.

The idea of using an intermediate representation, and a "front end" and a "back end", when designing compilers, is really pretty simple.

Imagine you've got several compiled programming languages, say C++, Objective-C, and Eiffel (just for the sake of argument.)

Imagine further you've got several different computer architectures, say x86, Itanium, and PowerPC.

If you build separate compilers to compile each language for each architecture, then you have to write nine separate compilers (C++ for x86, C++ for Itanium...)

But imagine that you define an abstract way of describing compiled code. This definition would be general enough to apply to all computer architectures, but not specific to any of them. Call this the "intermediate representation."

Then you can do your compiling in two pieces: 1) compile the programming language into a program in the intermediate representation (this is the "front end"), and 2) generate real machine instructions for one of your architectures from the IR (that's called the "back end").
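To make that concrete, here's a deliberately tiny sketch (not from any real compiler — the instruction names and IR format are made up for illustration) of a front end that turns an expression like "1 + 1" into a stack-based IR, and one back end that turns that same IR into x86-flavored pseudo-assembly. A second back end for another architecture would consume the identical IR:

```python
# Hypothetical toy compiler: one front end, one back end, shared IR.

def front_end(source):
    """Compile 'X + Y' (two integer literals) into a stack-based IR."""
    left, _, right = source.split()
    return [("push", int(left)), ("push", int(right)), ("add",)]

def back_end_x86ish(ir):
    """Emit x86-flavored pseudo-assembly from the IR (illustrative only)."""
    out = []
    for op in ir:
        if op[0] == "push":
            out.append(f"push {op[1]}")
        elif op[0] == "add":
            out += ["pop ebx", "pop eax", "add eax, ebx", "push eax"]
    return out

ir = front_end("1 + 1")
print(ir)                    # [('push', 1), ('push', 1), ('add',)]
print(back_end_x86ish(ir))
```

The point of the split: `front_end` knows nothing about x86, and `back_end_x86ish` knows nothing about the source language. Adding a new language or a new architecture means writing one new piece, not one per pair.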

Now to do the same amount of work, you just have to write three front ends, and three back ends; that's six pieces of software instead of nine. If you had five languages and five architectures, then it'd be ten instead of 25. Furthermore, it's ten completely independent pieces, rather than 25 with a lot of overlap -- i.e., the need to cut-and-paste a lot of code. The advantages should be obvious.
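The counting argument above is just M×N versus M+N, which you can check directly:

```python
# Compilers needed: M languages targeting N architectures.
def compilers_needed(languages, architectures):
    direct = languages * architectures     # one full compiler per pair
    with_ir = languages + architectures    # M front ends + N back ends
    return direct, with_ir

print(compilers_needed(3, 3))   # (9, 6)
print(compilers_needed(5, 5))   # (25, 10)
```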

Now the thing is, I had to think hard to come up with three languages that are fully compiled, and three architectures that really need different machine code! This was a lot more important in The Old Days than it is now, frankly.

Note that the JVM, and Microsoft's CLR (Common Language Runtime), are conceptually similar to the IR! Both of them allow you to compile multiple languages to run on them, and further, both of them can be implemented on many different platforms. They offer the same advantage: you only have to write the JVM once for each platform, and only have to write a compiler once for any language to target the JVM, and then any JVM language (Java, Scala, Groovy) will run on any platform with a JVM.
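To tie this back to the original question: javac is a front end that compiles Java source to JVM bytecode (the IR), and the JVM for your platform is the back end plus runtime that executes it. Adding two ints compiles to stack instructions along the lines of "push, push, iadd". Here's a hypothetical sketch of such a stack machine in Python (the instruction set is simplified; real JVM bytecode has many more opcodes and a class-file format around it):

```python
# Hypothetical sketch: a tiny stack machine executing JVM-like bytecode.

def run(bytecode):
    stack = []
    for instr in bytecode:
        if instr[0] == "iconst":     # push an integer constant
            stack.append(instr[1])
        elif instr[0] == "iadd":     # pop two ints, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif instr[0] == "ireturn":  # return the top of the stack
            return stack.pop()

program = [("iconst", 1), ("iconst", 1), ("iadd",), ("ireturn",)]
print(run(program))   # 2
```

That is essentially what happens to the 1+1 program: the front end produces bytecode like this once, and any JVM on any platform can run it.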