Brian Overland

author
since Sep 09, 2011

Recent posts by Brian Overland

A couple of things:

(1) Once again, the thread's original question was why this didn't work:

main; {
}

The single biggest error here is that the semi-colon must not come right after the word "main". As to the other issues...



(2) I have read and re-read the C++ specification on the subject of "int main" extra carefully. Everyone is correct that "void main" is simply not standard -- it is not supported by the spec -- but what the specification does say is this: the following two forms, and only these forms, are standard, in that every C++ compiler must support them:

int main()

int main(int argc, char *argv[])

It then says that handling of other forms is implementation-defined -- that is, the implementation may CHOOSE to accept other types. (And yes, as has been pointed out, relying on this is dangerous and risky, even though so many compilers accept "void main".) The question here is not one of what might seem reasonable, but rather what the spec says. The C++ specification further states that if the main function reaches the end without a return statement, the compiler is to interpret the function as if it ended with:

return 0;

In other words, returning 0 is the default behavior. What is misleading here is that if you don't read the C++ spec, you might infer that "void" function behavior is supported for main, especially as quite a few compilers do accept "void main," not flagging it as any kind of error. What these compilers are really doing is permitting the declaration and then interpreting it as if it were declared "int main". main may therefore behave like a void function even though it really has int return type! Confused yet?

The moral, as has been pointed out, is that "void main" is useless and unnecessarily risky as someday you may try porting your code to a compiler that doesn't allow "void main" and flags it as an error. In summary, here is probably what the original questioner intended:

int main() {
;
}

Note the semi-colon denotes an empty statement and must come WITHIN a function, never outside. That is very, very basic, of course. And yeah, everyone should just always declare main as "int" to be safe. It turns out that there is nothing to be gained from ever using "void main", even though it happens to be accepted by a lot of compilers.
12 years ago
I agree with the last post. The combination of so much legacy code -- and the inability in some cases to move to an embedded system (due to issues of speed, size, efficiency) -- mean that C++ is not going away.

-- BrianO.
12 years ago
This is more a subject for a Visual Basic BB, David, but I personally find that Visual Basic is outstanding for personal productivity software I write for my own use and for others. Visual Basic is not intended to produce commercial software you buy off the shelf. It is great for the recreational, hobbyist, and part-time programmer who wants to get things working fast in the Windows environment.

If you want to write professional applications to be sold to a mass market, then yes, I suggest you learn C++.

Maybe I overstepped by saying that VB is the BEST of its kind. OK, that's a matter of opinion, and to be honest, I haven't checked every development environment in the entire world.

My point was -- use the right tool for the right job, that's all. You may have your own favorites, that's fine.

Brian Overland
12 years ago
And so the example I gave was that Bill Gates did programming that was probably way beyond what any compiler of the time could do.

But yes, things have progressed a bit since then. <grin>

At Microsoft, it has long been an internal policy that a developer (i.e., a writer of production code) must make a clear case before writing any function in assembly language -- the presumption is that the optimizing compiler can usually do better!

-- Brian Overland
12 years ago
No offense intended, but the question "Would assembly code always be more efficient?" made me smile because it strikes me like asking, "Can a computer beat a human at chess?" The answer is probably, yes, but it does depend to some extent on both the human and the computer.

You could, in theory, do any trick with assembly code or machine code that even the best optimizing compiler could do -- in theory. The real question is, would it be worth all the extra work of applying all those tricks yourself? And would you be smart enough to see all the optimizing opportunities yourself? And: is any human that smart?

A fascinating case in point (though I'm sure there are others) is Bill Gates back in the late 1970s -- or was it the early 80s? -- writing BASICA for the first personal computers and miraculously fitting it all into 64K, including enough space for the user program/data area. He had to program down to the hardware, pulling off every optimizing trick he could think of. He couldn't possibly have created BASICA in 64K without writing it at the assembly/machine-code level. For one thing, an HLL would've generated far too much overhead for him to fit the code into that tiny space.

But yeah... since then, the cases in which it has been necessary to write anything in assembly code have become rarer and rarer.
12 years ago
Oh yeah -- it's there in the book, so my advice is: buy the book!

To be honest, I must give you a longer answer here. I cover most of the C++0x spec, but the spec itself is very, very expensive... also, large parts of it were difficult to test, difficult to acquire a compiler for, and still being changed.

What is most important depends on what you use it for. To me some of the most important new features of the spec are:

-- "long long int" type (64 bits usually), though that's long been supported by some compilers

-- range-based "for", which works like "for each" in Basic; it removes the need to check boundary conditions yourself, and so is much less error-prone, as well as convenient

-- subclasses inherit base-class constructors... a potentially huge change for object-oriented programming, potentially saving much work

-- strongly typed enumerations... so that the "enum" keyword can be used to create even stronger type declarations

-- user-defined constants... essentially you can define new "literal" formats for the compiler to recognize. This is a further step in enabling you to truly use classes like they were primitive types. You can make a literal like "3i" into an instance of an Imaginary number class, for example, so that "5+3i" denotes a complex number. (Cool.)

-- more consistent initialization rules

-- smart pointers, which manage deallocation for you automatically (not true garbage collection, but similar in effect)

These are just some of the features I felt were really important.

I am sorry I did not get to lambda functions and multi-threading... but these I felt were outside the scope of the book, which was already getting too large. Lambda functions are extremely difficult to explain, by the way... it's like you're creating a function definition "on the fly" without ever calling it in the normal way. Personally, I find it incredibly difficult to explain why this is useful, although there are some very advanced programmers who want it.

== Brian Overland
12 years ago
Anthony AJ is completely right and has basically said it all.

I will add that C was designed from the start to be as platform independent as possible, which was a somewhat paradoxical goal given that it also enabled people to write "closer to the hardware." So, depending on how you wrote your code, you could make your programs as platform INDEPENDENT... or platform DEPENDENT... as you wanted.

And C++ inherits most of C's traits, particularly with regard to data types...

Certain things in C (still in C++) are potential land mines. Particularly bad is the fact that the "int" type is usually 16 bits wide on 16-bit systems, while it is 32 bits wide on 32-bit systems. This means that code running perfectly well on 32-bit systems can "break" badly after being recompiled for 16-bit systems!

On 64-bit systems, meanwhile, "int" typically stays 32 bits while pointers (and often "long") widen to 64 -- so you cannot even count on "int" matching the machine's word size.

Consequently, you might want to avoid the "int" type altogether and stick to "short", "long", and "long long int"... but even with those, be careful, because the C++ spec does not absolutely guarantee specific sizes. Oops! (Top-secret advice: if you want to do what Microsoft and other companies do, define types such as "INT32" and "INT16" in header files, which you then carefully maintain for different platforms... then use INT16, INT32, and INT64 as your primitive types. Avoid the standard types. That's if you want to be REALLY careful.)


My strong advice to you -- if you cannot avoid platform specific code -- is to "modularize" your program as much as possible (object orientation can sometimes help there, by the way) so that all the platform specific stuff (your I/O functions for example) is handled by just a few functions or classes. Then, write the rest of your program to be as platform independent as possible, so that you don't need to rewrite everything when you compile for a new system. Keep everything platform-dependent in just one module which you can rewrite as you need to.


Of course, you absolutely have to recompile for each new environment or platform! Each platform will have its own compiler or compilers created for it. In each case, the compiler's function is to translate the (relatively) more generic C++ code into machine code that will run on that particular platform... and by "platform," remember, I refer to a particular processor type (thus different machine code), system architecture, and operating system.

Hope this helps,


== Brian Overland
12 years ago
Sorry for the late reply on this.

Answers: this is for both people completely new to programming as well as people who have already done some programming.

For a number of reasons -- such as trying to make a book of this scope platform independent -- all the examples use simple console I/O. It is not specific to Windows except in one way: to prevent the DOS window from going away too quickly, I use

system("PAUSE");

For the Mac environment, you may need to replace this with

cin.get();

Good luck with your projects,

== Brian Overland
12 years ago
Hi... Raja has already given you a great answer. Bravo, Raja!

I will answer the question about the STL, though... the STL capabilities are very impressive: sophisticated but generic (platform-independent) data-structure and algorithm capabilities. For example, you get stacks, vectors, sophisticated list structures, iterators (to go through the lists), automatically sorted lists, data dictionaries, and so on. Not to mention simplified strings that have all the benefits and ease-of-use of Visual Basic strings.

But what STL does not have are features that apply to specific platforms so far as I know. I suspect many of the STL features are useful in creating a database from the ground up (for example), but they are of less help in interacting with a specific, existing database engine like SQL or Access.

So, I must refer you back to the earlier answer from Raja. Great question, though...

-- Brian Overland
12 years ago
I can't say for certain, but I would venture an educated guess... it is likely that they use C and/or C++, because these are more likely to be used in big commercial applications that can't run with the overhead or restrictions of, say, Visual Basic or LISP.

The most efficient code of all, of course, would be written in assembly language or machine code. But those languages require ten times the man-hours to do the same things you'd do in C++. It's not worth it to spend those extra man-hours.

I can't say for sure that Google couldn't be written in C# or Java. I'm not sure. But C++ allows you to do pointer arithmetic and lower-level operations when you need to. Of course, you need to know what you're doing, because there is less protection against bad coding. But this is exactly why the "Big Boys" use it.

Different situations require different tools. Visual Basic has long been the best fast prototyping tool. C/C++ is now considered the best "close to the hardware" language, so the largest commercial applications tend to be written in it.

That's all by way of speculation, but still....

Brian Overland
12 years ago
Oh, wait a moment... there is no companion CD. There was one for the first edition only (you must have gotten to that site), not the second edition.

Reason for no companion CD is that all the code is online, following the link I mentioned earlier. Some people still use CDs, but most people download from the Internet these days.


Brian Overland
12 years ago
Sorry for the long wait for a response on this one. All you need to do is go to the book's website, mentioned on page xxvii in the Preface...

www.informit.com/title9780132673266

From there, there is a button that you can click to go to the software download.

I apologize the URL has all those strange numbers in it -- not my doing.

== Brian Overland
12 years ago
Brian Overland here. My two cents:

For certain kinds of advanced programmers, AJ is right that multithreading and concurrent processing will become more and more important.

BUT for beginner or intermediate programmers, bear in mind that multithreading is an advanced subject. You won't need it to write simple applications... in fact, you can write rather sophisticated programs without it.

HOWEVER, I do agree it will become more and more important in the future. I would recommend: Buy Anthony AJ Williams' book. At least after you've mastered the basics of C++.

-- Brian O.
12 years ago
I would say to Joseph, hey, maybe C++ isn't so tedious if you start with one of MY books... namely the most recent, C++ Without Fear, 2nd Edition. (Yes, I know, a shameless plug.)

One problem, though, is that even I haven't written about C in years. Other than the object-oriented and other extensions, C and C++ are 99% the same, but that 1% difference might trip you up if you start by learning C++. There are some things C lets you do that C++ doesn't, and vice versa. (All the class and template stuff, of course, is new in C++ and not supported in C.) Overloading, too, is unique to C++ as opposed to C.

The K&R book, The C Programming Language, still does the job of teaching C succinctly and intelligently, but it moves very, very fast, and it assumes you understand all the concepts of programming -- including what an address is and what it is for.

I would recommend one of my old books, C In Plain English... unfortunately, it hasn't been in print for a while, I think. But you may be able to find a copy on eBay.

Best of luck,

Brian Overland
12 years ago
Brian Overland here. A brief comment this time. I haven't had the pleasure of receiving a copy of Anthony AJ Williams' book yet, but I would have to say: if you are interested in the topic of multithreading, I would heartily recommend his book, as he clearly seems to know what he's talking about. It is not an easy subject and requires specialized expertise.

AJ, I will look for your book and try to get it myself! Best wishes,

Brian Overland
12 years ago