Hi Marouane,
Great question. My personal take -- shared by many, though your mileage may vary with the particular circumstances you're addressing at a given moment -- is that Reactor and RSocket provide a set of capabilities that is difficult to match: resilience, flow control, transport independence, full bidirectionality, multiplexing, and more. But they aren't a single solution that will supplant all others. Two examples where they may not be the best fit:
1. Distributed systems of apps that are scaling adequately already
2. Systems of apps that include blocking APIs that cannot or will not be rewritten, e.g. external systems (especially if all bottlenecks are in those external systems)
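Flow control is worth a concrete illustration, since it's the capability people most often mean when they say "reactive." Here's a minimal sketch using the JDK's own java.util.concurrent.Flow API (standing in for Reactor, which implements the same Reactive Streams contract): a subscriber signals demand one item at a time, so the publisher can never outrun it. The class name BackpressureDemo is mine, purely for illustration.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        SubmissionPublisher<Integer> pub = new SubmissionPublisher<>();
        pub.subscribe(new Flow.Subscriber<Integer>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);            // demand exactly one item up front
            }
            @Override public void onNext(Integer item) {
                System.out.println("got " + item);
                subscription.request(1); // pull the next item only when ready
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        });
        for (int i = 1; i <= 3; i++) pub.submit(i); // blocks if the buffer fills
        pub.close();
        done.await();
    }
}
```

In Reactor the same demand signaling happens under the covers of every Flux; you only see it explicitly when you implement a Subscriber yourself or use operators like limitRate.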
That said, reactive streams undeniably add complexity to an otherwise fairly pedestrian application or group of apps. Tooling has had 25+ years under so-called imperative Java to develop, mature, and be refined; reactive-supporting tooling, while making great strides, began much later in the Java ecosystem, which is another challenge. Still, the state of that tooling is advancing rapidly, as you might imagine, since many lessons carry over from the earlier (blocking) efforts; it isn't a 100% green field in that regard.
Regarding observability and metrics (and a few other topics), though, it's far easier to build an API and its interactions around a non-blocking core and then block at the edge than to go the other way, so that's exactly what you see happening across the Spring ecosystem, whether internal, partner, or community. To give one small representative example, Spring Boot Actuator interacts with and produces Reactor types (Publishers, both Monos and Fluxes)...and thus can be consumed perfectly well from blocking applications expecting plain objects, Iterable<T>, etc. So metrics, tracing, security, and other concerns accommodate and embrace Publishers, along with the expectation that context must be maintained when processing doesn't occur sequentially on a single thread.
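To make the "non-blocking first, block at the edge" point concrete, here's a small sketch using the JDK's java.util.concurrent.Flow API rather than Reactor itself, so it runs with no dependencies; in Reactor the equivalent edge calls are Mono#block() and Flux#toIterable(). The BlockingBridge class and collect helper are illustrative names of my own, not from any Spring API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BlockingBridge {
    // Subscribe to a non-blocking Publisher and gather everything it emits.
    // The returned future completes when the stream completes.
    static <T> CompletableFuture<List<T>> collect(Flow.Publisher<T> publisher) {
        CompletableFuture<List<T>> result = new CompletableFuture<>();
        publisher.subscribe(new Flow.Subscriber<T>() {
            private final List<T> items = new ArrayList<>();
            @Override public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
            @Override public void onNext(T item) { items.add(item); }
            @Override public void onError(Throwable t) { result.completeExceptionally(t); }
            @Override public void onComplete() { result.complete(items); }
        });
        return result;
    }

    public static void main(String[] args) {
        SubmissionPublisher<Integer> pub = new SubmissionPublisher<>();
        CompletableFuture<List<Integer>> all = collect(pub); // non-blocking subscribe
        for (int i = 1; i <= 3; i++) pub.submit(i);
        pub.close();
        System.out.println(all.join()); // block only at the edge: prints [1, 2, 3]
    }
}
```

The whole pipeline stays non-blocking; only the single join() at the boundary blocks. Going the other direction -- wrapping an inherently blocking API so it merely looks reactive -- ties up a thread per call and forfeits the benefits, which is why the ecosystem builds reactive-first and adapts down.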
Hope this helps!
Mark