I have a Structured Streaming application running with Kafka on Spark 2.3.
The `spark-sql-kafka-0-10_2.11` version is 2.3.0.
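For context, the Kafka source is set up roughly like this (a minimal sketch; the topic name, bootstrap servers, and console sink are placeholders, not my real values):

```scala
import org.apache.spark.sql.SparkSession

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-offset-gap-repro")
      .getOrCreate()

    // Kafka source; bootstrap servers and topic are placeholders.
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "my-topic")
      // failOnDataLoss controls whether the query fails when expected
      // offsets are missing (e.g. deleted by retention). Note: on Spark 2.3
      // it does not necessarily handle gaps caused by log compaction.
      .option("failOnDataLoss", "false")
      .load()

    val query = df.selectExpr("CAST(value AS STRING)")
      .writeStream
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```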
The application starts reading messages and processes them successfully; then, after reaching a specific offset (shown in the exception message), it throws the following exception:
It always fails on the same offset. This looks like it is caused by a gap in the offsets: in the Kafka UI I can see that offset 665 is followed by 667 (666 was skipped for some reason), and the Kafka client in my Structured Streaming application tries to fetch offset 666 and fails.
After digging into Spark's source code, I see that this exception was not expected to happen (according to the comment): spark-source-code
So I am wondering: am I doing something wrong, or is this a bug in the specific version I am using?