From the application's point of view, 100% CPU utilization is good and desired.
If the application has work to do, I expect it to use 100% of the CPU to finish that work as fast as possible. This is even more important when you have multiple CPUs and threads: it is best if the application loads ALL CPUs to 100% so the work completes as quickly as possible. If the application has work to do but doesn't use 100% CPU, then there is some other bottleneck (IO, bad synchronization, etc.).
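To make that concrete, here is a minimal sketch of CPU-bound work split across all available CPUs. The class name, the pool size, and the dummy summing loop are illustrative assumptions, not code from the original answer:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SaturateCpus {
    public static void main(String[] args) throws Exception {
        // One worker thread per CPU, so every CPU can be kept busy.
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);

        List<Future<Long>> results = new ArrayList<Future<Long>>();
        for (int i = 0; i < cpus; i++) {
            // Pure computation: no IO and no shared locks, so nothing
            // stops each thread from using its CPU at 100%.
            results.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (long n = 0; n < 2_000_000_000L; n++) {
                        sum += n;
                    }
                    return sum;
                }
            }));
        }

        // While these tasks run, 'top' should show all CPUs close to 100%.
        for (Future<Long> r : results) {
            r.get();
        }
        pool.shutdown();
    }
}
```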
E.g. you have a web application running on a server with 2 CPUs. You load-test it with "ab -c 4 -n 1000 http://localhost:8080/myapp/" (4 concurrent users, 1000 requests). If your application is implemented correctly, it will use 100% of both CPUs and process n requests/second. Another application may only use 50% of the 2 CPUs and process fewer requests/second. The second application is implemented poorly because it doesn't use all the available resources.
Now, what a profiler helps you do is optimize the application so that it can deliver even more requests per second. Of course, if the number of requests per second is constant, the optimization will instead lower CPU usage.
Regarding CPU load on production systems: we try to keep CPU load below 50% on each server in a 2-server cluster. That way, if one server goes down, the other can still process the entire load, running at roughly 100% CPU instead of being overloaded.