To focus on the database insert question, I'd take two approaches. For humans who expect a quick response, start with every update request doing its inserts and committing immediately. That's simple and probably matches user expectations about transactions. For the files coming in from outside I'd start with the same approach, but you have the option to batch them up. If you can afford some latency in reading the files, you can throttle the number of threads that process them. Be sure to prove you have a problem before solving it with complex batching.
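A minimal sketch of both paths, assuming JDBC with a pooled DataSource; the `events(payload)` table and the pool size of 4 are made-up illustrations, not anything from your schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

public class InsertStrategies {
    private final DataSource ds; // pooled data source (e.g. HikariCP)
    // Throttle file ingestion: at most 4 files are processed concurrently.
    private final ExecutorService filePool = Executors.newFixedThreadPool(4);

    public InsertStrategies(DataSource ds) { this.ds = ds; }

    // Interactive path: one insert, one commit, quick response for the user.
    public void insertForUser(String payload) throws Exception {
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO events (payload) VALUES (?)")) {
            ps.setString(1, payload);
            ps.executeUpdate(); // relies on autocommit: each request is its own transaction
        }
    }

    // File path: same statement, but rows are batched and committed together.
    public void ingestFile(List<String> rows) {
        filePool.submit(() -> {
            try (Connection c = ds.getConnection();
                 PreparedStatement ps = c.prepareStatement(
                         "INSERT INTO events (payload) VALUES (?)")) {
                c.setAutoCommit(false);
                for (String row : rows) {
                    ps.setString(1, row);
                    ps.addBatch();
                }
                ps.executeBatch();
                c.commit(); // one transaction per file, not per row
            } catch (Exception e) {
                e.printStackTrace(); // real code would log and retry
            }
        });
    }
}
```

The fixed-size pool is the throttle: if files arrive faster than the workers can drain them, the extra work simply waits in the executor's queue, so file latency grows instead of database load.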
Will you have connection pooling? The pool will probably have settings to throttle how many connections actually run at the same time. If requests come in faster than you can handle, they'll queue up and eventually time out. If that's not acceptable, you may have to beef up the hardware to handle more concurrent transactions.
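For example, with HikariCP (just one option; any pool has equivalent knobs) the cap and the queue timeout are two config lines; the JDBC URL and the numbers here are placeholders:

```java
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    static DataSource makePool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/app"); // hypothetical URL
        config.setMaximumPoolSize(20);      // hard cap on concurrent DB transactions
        config.setConnectionTimeout(5_000); // waiters queue up to 5s for a connection,
                                            // then getConnection() fails with an SQLException
        return new HikariDataSource(config);
    }
}
```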
Concurrent users != concurrent requests. You'll observe some ratio of "think time" to "server busy" time per user. With 1000 users I certainly wouldn't expect 1000 concurrent requests. Estimate that ratio and stress test early and often.
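A back-of-envelope way to guess it; the think and service times below are illustrative assumptions, not measurements:

```java
public class ConcurrencyEstimate {
    public static void main(String[] args) {
        double users = 1000;
        double thinkSeconds = 10.0;  // user reads/types between requests
        double serviceSeconds = 0.5; // server busy per request
        // Each user is "in the server" only serviceSeconds out of every cycle.
        double concurrent = users * serviceSeconds / (thinkSeconds + serviceSeconds);
        System.out.printf("Expected concurrent requests: ~%.0f%n", concurrent); // ~48
    }
}
```

Under those guesses, 1000 users generate only about 48 concurrent requests; measure your real ratio and redo the math.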
My team stress tests almost every build. Amazingly simple changes to SQL and indexes can make worlds of difference, and you want to find those as soon as possible.