Normally, when you write a program in C, you would compile it to native code directly.
Compiling to bytecode and then compiling that bytecode to native code at runtime, as the JVM does with its JIT compiler, does have some advantages over compiling to native code directly. In principle, a JIT compiler can generate more efficient code than an ahead-of-time compiler, because it knows more about the system the code is actually running on. It can optimize for the specific processor the program is running on (an ahead-of-time compiler doesn't know in advance which exact processor the user will have), and it can use profiling information that only exists while the program runs.
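To make the processor-specific point concrete, here is a minimal C sketch (not part of the original answer) of runtime CPU dispatch, which approximates by hand what a JIT compiler does automatically. It assumes GCC or Clang on an x86 machine, where `__builtin_cpu_supports()` and the `target` attribute are available:

```c
/* Hypothetical sketch: pick a CPU-specific implementation at runtime,
 * the way a JIT compiler does when it generates code for the machine
 * it is actually running on. Assumes GCC or Clang on x86. */
#include <stdio.h>

static int sum_generic(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* A version the compiler may vectorize with AVX2 instructions;
 * it must only be called on CPUs that actually support AVX2. */
__attribute__((target("avx2")))
static int sum_avx2(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void) {
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};

    __builtin_cpu_init();
    /* Choose the best implementation for the CPU we are running on. */
    int (*sum)(const int *, int) =
        __builtin_cpu_supports("avx2") ? sum_avx2 : sum_generic;

    printf("%d\n", sum(data, 8));
    return 0;
}
```

A JIT compiler makes this kind of choice for all the code it compiles, without the programmer having to write any dispatch logic.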
There is a project called LLVM which compiles C, C++ and source code in other programming languages to a kind of bytecode (LLVM calls it bitcode), which can be converted to native machine code at runtime, much like the JVM does. Apple is using LLVM on Mac OS X, and it seems to be the next generation of C and C++ compilers. One of the things this makes possible is dynamically compiling programs to run on different kinds of processors, for example on the CPU or on the GPU (the processor on the graphics card).
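As a rough illustration of that workflow, the following hypothetical example turns a small C program into LLVM bitcode and then runs it with LLVM's JIT; the file name is made up, but clang, lli and llc are the real LLVM tools (assuming they are installed):

```c
/* Hypothetical sketch: the same hello.c can be compiled to LLVM bitcode
 * and then executed or compiled later, assuming the LLVM tools are installed:
 *
 *   clang -O2 -emit-llvm -c hello.c -o hello.bc   # C source -> LLVM bitcode
 *   lli hello.bc                                  # JIT-compile and run the bitcode
 *   llc hello.bc -o hello.s                       # or compile it to native assembly
 */
#include <stdio.h>

int main(void) {
    printf("Hello from LLVM bitcode\n");
    return 0;
}
```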
Note, however, that LLVM bitcode doesn't run on the JVM; the principle is the same, but it really doesn't have anything to do with Java bytecode.