The other day, my group got a help desk ticket. The interface (that we maintain) between application 'a' and application 'b' had stopped working - the data that reached the downstream application was not correct. The application 'a' support team wanted to know what we had changed that broke things, and they wanted it fixed - and fixed NOW. After a few hours of investigation, during which we could find NOTHING that had changed on our side, we discovered that application 'a' had been upgraded over the weekend, and their data format had changed.
We get tickets all the time saying "we have messages queued up - why haven't you grabbed them?". Again, we investigate, and find that we are up and waiting for them to connect. We contact them, and only THEN do they look at their side and reply "Oh, my interface was down... let me start it", and 200 messages come across in half a second.
Over and over, when there is an interface problem, people blame us first. I would say 70% of the time the problem is with the source system, 25% of the time it is with the destination system, and only about 5% of the time is it us, the integration engine/system.
WHY is it so hard for people to accept that it just MIGHT be something on their end? Especially when something on their end changes? We had some folks tell us that they didn't need to test, because nothing was changing. All they were doing was replacing a server - which would have a new IP and a new OS... And they were SHOCKED when things didn't work.
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
In technical fields, people try to make a big ruckus out of your 5% of mistakes - I have experienced it too. Even when you are right, your superiors will prove you wrong. I guess no team wants to take responsibility for the mistakes they make, as it might impact their reputation and performance. But I think it's the wrong approach; people should be willing to accept their mistakes if they want corporations to run efficiently.