the subject of garbage collectors and their impact on runtime speed has been debated for decades. there's no clear resolution either way, but garbage collectors do tend to get better as time goes on.
in theory, collecting your garbage manually the C++ way might make your code run faster. but be aware of why this is: it's because you're essentially writing your own garbage collector, one that runs when you think it needs to run, and for as long as you think it needs to run. in return for that fine-grained control over your home-made GC, you take on the burden of making sure it runs often enough: that all your garbage really does, eventually, get collected. if you fail, you leak memory.
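
to make that concrete, here's a minimal C++ sketch of the failure mode (the Report type and the process function are invented for this illustration, not taken from any real codebase):

    #include <string>

    struct Report {        // hypothetical type, invented for this example
        std::string body;
    };

    // manual management: every `new` must be matched by a `delete`
    // on every path out of the function.
    bool process(bool urgent) {
        Report* r = new Report{"quarterly numbers"};
        if (urgent) {
            return false;  // bug: this early return skips the delete, so we leak
        }
        // ... use *r ...
        delete r;          // our home-made "GC" runs here, and only here
        return true;
    }

the usual C++ remedy is RAII (e.g. std::unique_ptr), which ties the delete to scope exit; that automates the bookkeeping, but it's still the same hand-rolled collection policy at heart: reclaim each object the instant it dies.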
on top of that, the strategy you advocate (reclaiming every object the instant it dies) isn't necessarily the best way to make your program run fast. what if there's some urgent job your program would be better off seeing to instead of collecting that garbage? might it not be smarter to put off memory-management chores until nothing else interesting is happening? and how do you do that reliably in a language like C++?
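
one conceivable answer, sketched below purely as an illustration (the DeferredFreePool name, its retire/flush interface, and the notion of an "idle" hook are all assumptions, not any standard API), is to queue dead objects and reclaim them later:

    #include <functional>
    #include <vector>

    // sketch of deferred reclamation in C++: instead of deleting an object
    // the moment it becomes garbage, queue a deleter and run the queue
    // when the program has nothing more urgent to do.
    class DeferredFreePool {
        std::vector<std::function<void()>> graveyard_;
    public:
        template <typename T>
        void retire(T* p) {
            graveyard_.push_back([p] { delete p; });  // remember how to free it
        }

        // call this from your idle loop; deciding what counts as "idle"
        // (and guaranteeing flush actually runs before memory runs out)
        // is exactly the hard part the question above points at.
        void flush() {
            for (auto& reclaim : graveyard_) reclaim();
            graveyard_.clear();
        }
    };

note that this just moves the problem: you've now written a small garbage collector of your own, and if flush never runs, you still leak.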
some famous programmers (Jamie Zawinski comes to mind) have argued that a good garbage collector is a net win even if it slows you down: you won't leak memory, and you won't have to worry about collecting garbage by hand, so you'll have fewer bugs to fix and more time to spend writing your actual program instead of managing memory. and programmer time is more expensive than computer time. i tend to fall in this camp myself.
other people think along the lines you outlined, and insist that if they get enough control over exactly what their code does and exactly when it happens, they can always write better code. this may be true, but if so, why aren't they writing machine code...?