I can give some thoughts, but if you feel that they're really nothing but low-grade garden
fertilizer, I will understand.
Conceptually, I won't fault your idea or approach based on what information you are at liberty to divulge. Practically, however, certain things occur to me.
Within your system as described, if I were to hook remote debugging into an application, it would be better if each copy of the application used the same internal debug port. In the case of, say, running multiple Docker containers on the same VM, you do have the consideration that one and only one process can own a given VM port, but that just means you should be using normal Docker port mapping. After all, if I have 3 containers presenting webservers on their standard port 8080, then when I deploy them, I'd likely map port 8080 on the first container to something like 9180, the second to 9280, and the third to 9380. You can, of course, choose any scheme you prefer. It's no big deal then to make the internal debug port always be port 8286 (chosen at random here), then map it to 9186, 9286, and 9386 respectively. That means I have a nice, predictable formula to deploy under.
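For what it's worth, here's a minimal sketch of what I mean, assuming the app is Python and you attach with debugpy. The port numbers and the ENABLE_REMOTE_DEBUG switch are purely illustrative, not anything from your actual setup:

```python
# remote_debug.py -- illustrative only; assumes a Python app using debugpy.
# Every container listens on the same internal debug port (8286 here);
# Docker's -p flag maps it to a unique, predictable host port per container:
#   docker run -p 9180:8080 -p 9186:8286 myapp   # container 1
#   docker run -p 9280:8080 -p 9286:8286 myapp   # container 2
#   docker run -p 9380:8080 -p 9386:8286 myapp   # container 3
import os

INTERNAL_DEBUG_PORT = 8286  # same in every copy of the app

if os.environ.get("ENABLE_REMOTE_DEBUG") == "1":
    import debugpy
    # Bind to all interfaces so the mapped host port can reach it.
    debugpy.listen(("0.0.0.0", INTERNAL_DEBUG_PORT))
    # Optionally block until the IDE attaches:
    # debugpy.wait_for_client()
```

The point being that the app itself never has to know which host port it ended up behind; only the deployment formula does.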
Having said that, however, the whole idea of having 20 or so elastic instances all waiting on a breakpoint at a specific statement sits poorly with me. You're basically just setting a statistical trap, and the better place to statistically test code is before it gets deployed into a farm. That is, instead of 20 or 100 instances of the app, hit a single test instance with 20, 100, even thousands of unit tests before going into production. If these are stateless apps - and as a general rule, elastic systems should be - then one copy is as good as 1000. Only if the copies interact with each other would it be necessary to have more than one. And since having multiple interacting copies is a whole new level of nightmare, it's one you should be avoiding anyway, if at all possible.
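To make that concrete, here's the sort of thing I have in mind, assuming a stateless handler. The handle_request function and its contract are made up for illustration; substitute whatever your app actually exposes:

```python
# test_handler.py -- illustrative pytest sketch; `handle_request` and its
# behavior are hypothetical stand-ins for the app's stateless entry point.
import pytest


def handle_request(payload: dict) -> dict:
    """Stand-in for the app's stateless request handler."""
    if "user" not in payload:
        raise ValueError("missing user")
    return {"status": "ok", "user": payload["user"]}


@pytest.mark.parametrize("user", ["alice", "bob", "x" * 1000])
def test_valid_requests(user):
    # One instance under test is as good as a thousand if the app is stateless.
    assert handle_request({"user": user})["status"] == "ok"


def test_missing_user_is_rejected():
    with pytest.raises(ValueError):
        handle_request({})
```

Pound on one copy like this before it ever sees the farm and the breakpoints-in-production urge mostly goes away.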
Which brings me to a more controversial matter. I consider Python these days to be my "go-to" language for quick-and-dirty apps. But not for industrial-grade stuff. Granted, there's a lot more time and expense required to design and build a
Java app, but a lot of that expense goes directly toward scalability and reliability (and security). The problem with Python is that it's an interpreted language, and one with dynamic typing. That allows you to "Git 'er Dun!" much faster, but experience has shown me that enterprise-grade apps are mostly independent of language when it comes to the total amount of work required to create, maintain, and support them - only the distribution of that work across lifecycle stages changes. Interpreted languages have a very short coding cycle, but you pay for it in late-stage debugging. In fact, my classic illustration of why interpreted languages are dangerous is that while a single misplaced punctuation mark will usually be caught and will halt a build in Java, I could sneeze while saving a Python file, inject significant garbage, and no one might discover the problem until next Leap Year.
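As a toy illustration of that failure mode (everything here is invented, not from any real app):

```python
# late_failure.py -- toy illustration only.
# This file loads and "runs" just fine, because Python only looks at a
# branch when that branch actually executes.


def archive_record(record, purge=False):
    print(f"archiving {record}")
    if purge:
        # Typo: 'recrod' instead of 'record'. Java would refuse to compile;
        # Python won't notice until the purge branch finally runs.
        print(f"purging {recrod}")  # NameError, but only at runtime


archive_record("invoice-42")              # works fine for months...
archive_record("invoice-43", purge=True)  # ...then blows up on Leap Year
```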
It's a serious problem. Many big-name Python shops have, in fact, invested significant time and money into developing tools to reduce their exposure to such errors. I don't have that sort of budget, nor that sort of investment in critical Python apps, so Java allows me to embarrass myself in the privacy of my test environment rather than in production and on the front page of USA Today.
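If you do stay with Python, the usual mitigation is to layer static analysis on top - for example, type hints checked with a tool such as mypy (pyflakes would also catch the undefined name in the sketch above). Again, this is just a sketch and the function is made up:

```python
# pricing.py -- illustrative only; the function and figures are invented.
# With type hints, a checker such as mypy flags the bad call below at
# "build" time instead of at 2 a.m. in production.


def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)


total = apply_discount("19.99", 10)  # mypy: str is not compatible with float
```

That buys back some of the compile-time safety net, but it's extra tooling and discipline you have to bolt on yourself.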
Not saying you should in any way abandon your efforts. But I'd definitely recommend looking at your pre-production testing practices.