Regarding flashback: I have no idea how widespread its usage is, nor how to find that out. In my opinion, good DBAs should at least be aware of its existence, as it can be immensely useful in solving certain kinds of emergency situations.
Flashback data archive has very low impact on the database. Data for the archive is generated from the UNDO tablespace, that is, not as part of the transaction that modified the data. (This means the transaction generally proceeds as fast as if there were no data archive on the table -- now compare that to a trigger-based solution...) Unless some database resource is fully utilized, there should be virtually no impact at all from using a flashback data archive.
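To illustrate, setting one up is just two statements. This is a minimal sketch; the archive name fda_90d, the tablespace fda_ts, and the table orders are all hypothetical:

```sql
-- Create an archive keeping 90 days of history (all names hypothetical)
CREATE FLASHBACK ARCHIVE fda_90d
  TABLESPACE fda_ts
  RETENTION 90 DAY;

-- Start tracking history for a table; normal DML on it is unaffected
ALTER TABLE orders FLASHBACK ARCHIVE fda_90d;
```

From that point on, historical row versions are retained for 90 days, captured in the background from the undo data rather than inside your transactions.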
I'd say that your requirement (that is, recovering mistakenly deleted data) has only two solutions - the flashback functionality, or a backup strategy that keeps data for a defined period (e.g. the last 90 days) and allows you to recover the database or a tablespace to any point in the protected period. (This is a troublesome process that might require some downtime, or another database machine to restore the backups to, and a competent DBA, so the flashback archive is much, much more flexible. But such a solution is probably in place anyway to protect against hardware failures, so you don't incur any additional overhead - such as disc space - at all.) There might be some other solutions I don't know about; I'm just a developer, after all.
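To give an idea of the flashback side, recovering accidentally deleted rows can be as simple as this (again, the table name orders and the one-hour window are just placeholders):

```sql
-- Inspect the table as it was an hour ago
SELECT *
FROM   orders AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;

-- Or rewind the whole table (row movement must be enabled first)
ALTER TABLE orders ENABLE ROW MOVEMENT;
FLASHBACK TABLE orders
  TO TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;
```

Without a data archive this only works as far back as your undo retention allows; with a flashback data archive on the table, flashback queries can reach back through the whole archive retention period.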
The other solution you mentioned - mirroring - does not generally protect against this kind of accident, as the mirrored databases are kept up to date with the master. Any data changes - even the bad ones - are immediately propagated to the mirrors, so you cannot use the mirrors to get at the data as it was before it was overwritten or deleted.
I have just one more thought on "users deleting data on purpose": this is a troublesome scenario. Generally, you should only let trusted users into your system. The less you can trust a user, the more you need to restrict their privileges. You also need to keep in mind that hostile operations are not limited to deletions and overwrites; an insert can introduce "bad data" as well (think of a user entering a fake invoice into an accounting system).
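At the SQL level, restricting privileges just means granting each role only what it needs; the role name app_role and the table orders below are made up for illustration:

```sql
-- Grant only what the role actually needs; DELETE is deliberately withheld
GRANT SELECT, INSERT, UPDATE ON orders TO app_role;
```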
Anyway, if you want to be able to repair bad data, you also need some way to detect it. You might have a review system that only allows verified data in (the reviewer might be a person or an automated process, of course). In this case you might design the system in such a way that data pending review is kept in some sort of queue and gets inserted into the live tables only after the review is done, effectively removing the need to recover mistakenly modified data. Of course, solutions like these would probably be quite expensive.
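As a rough sketch of that queue idea (all table and column names here are invented for illustration, not taken from any real schema):

```sql
-- Live table (simplified)
CREATE TABLE invoices (
  id          NUMBER        PRIMARY KEY,
  customer_id NUMBER        NOT NULL,
  amount      NUMBER(12,2)  NOT NULL
);

-- Staging table for rows awaiting review
CREATE TABLE invoice_queue (
  id          NUMBER        PRIMARY KEY,
  customer_id NUMBER        NOT NULL,
  amount      NUMBER(12,2)  NOT NULL,
  status      VARCHAR2(10)  DEFAULT 'PENDING' NOT NULL
);

-- Once a reviewer approves a row, promote it to the live table
-- and remove it from the queue in a single transaction
INSERT INTO invoices (id, customer_id, amount)
  SELECT id, customer_id, amount
  FROM   invoice_queue
  WHERE  id = :approved_id AND status = 'PENDING';

DELETE FROM invoice_queue
  WHERE id = :approved_id AND status = 'PENDING';

COMMIT;
```

The point is that nothing reaches the live tables until it has passed review, so a malicious or mistaken entry can simply be rejected in the queue instead of repaired afterwards.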
On the other hand, if such a review process is not in place (or if it fails in some instances), the errors introduced into your database will probably be discovered at random, when someone rechecks the data after obtaining suspicious results of some kind. Now, just repairing the data might not be enough. Reports could already have been generated from the wrong data, and important decisions taken based on those misleading reports. What do you do then? In my opinion, just putting a tool in place to repair the bad data, without thinking through the consequences of such situations, is not enough.