There was a bug in the system. The error couldn't be reproduced on localhost using the testing API connection. I asked the technical manager how I should proceed.
He said to use my localhost to connect to the staging API, since the bug could be reproduced on the staging server. I also asked about the database. He said to just put one item in our testing database, since there is no way to connect to the staging database without a VPN.
I connected to the staging API while using the testing database and was able to reproduce the bug. Afterwards, I fixed it.
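Concretely, the setup was something like this (an illustrative sketch; the variable names and URLs are made up, not the real project's):

    # Hypothetical configuration: the API endpoint and the database are two
    # independent settings, so nothing stops them pointing at different
    # environments at the same time.
    import os

    API_BASE_URL = os.environ.get("API_BASE_URL", "https://staging.example.com/api")
    DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost/testing_db")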
A few days later, the customer found that all the products on the staging server had disappeared. An investigation found that I had connected to the staging API without their permission, and because the database behind it was different, the products were temporarily removed from the cache. I had no idea beforehand that products could be removed from the cache this way.
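As best I understand it, the eviction worked roughly like this (a guess at the mechanism, with made-up names, not the actual code):

    # Sketch: a revalidation pass drops cached products that the backing
    # database no longer knows about.
    def revalidate(cache, db):
        for product_id in list(cache):
            if product_id not in db:
                del cache[product_id]

    staging_cache = {"p1": "Widget", "p2": "Gadget"}  # warmed from the staging DB
    testing_db = {"p99": "Debug item"}                # the one item I added

    revalidate(staging_cache, testing_db)
    print(staging_cache)                              # {} -- every product evicted

With the testing database behind the API, every staging product looked deleted, so the cache dropped them all.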
The customer threatened to make our company pay for the loss of the product display on the staging server. The technical manager now says there was no need to connect to the staging API. He was the one who came up with the idea and instructed me to do so, but now he says it is not his fault and is putting the blame on me, because I am the one who made the connection.
I do not know how much our company will have to pay, nor whether I can keep my job.
Yeah, I kinda missed that on first reading. If it's just a staging server, I don't see what the crisis is -- that's what staging servers are for. In any case, if you really think someone's going to toss it into the fan: lawyer.
And no database backup? If it's crucial data (as your PHB -- pointy-haired boss -- seems to be treating it), then why hasn't he put backup protocols in place? (Yeah, I know: PHB.)
Just wondering, then: what was the "cost" to the customer of this outage in a staging environment?
Is it your company's server, or the customer's server? (You called it "their" server.)
Who is ultimately responsible for the staging system? Is there a single person? Did they know you were debugging on it?
Whose permission would you normally get to connect to the "staging" API?
Is it one you have access to regularly, or is it normally off limits? How did you get the details to access it?
Any of those questions might help you find a paper trail.
At the end of the day it is still a stuff-up - and hopefully one that you (and maybe others on this forum) can learn from.
I'm not entirely certain what the lesson is...
- always cover your butt and get instructions like this in writing? (not one I would really want to take on board, because most of the time it is just a waste of time)
- don't debug on a pre-live server?
- the world is unfair?
If the data was THAT important -- important enough that losing it results in a financial loss -- then it isn't you who screwed up; it is whoever failed to implement a proper backup strategy.
Losing data in a "staging" or "pre-production" environment is bad if and only if there is no backup. The staging database is different from the live database, right?
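Even a staging database is cheap to protect. Something like this, run nightly from cron, would have contained the whole mess (a sketch assuming PostgreSQL; the database name and paths are placeholders):

    # Nightly backup sketch: pg_dump's custom format (-Fc) is compressed
    # and can be restored selectively with pg_restore.
    import datetime
    import subprocess

    def backup(db_name="staging", out_dir="/var/backups"):
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        out_file = f"{out_dir}/{db_name}-{stamp}.dump"
        subprocess.run(["pg_dump", "-Fc", "-f", out_file, db_name], check=True)
        return out_file

    if __name__ == "__main__":
        print("Wrote", backup())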
Regarding the connection in the first place: which computer did you use, given that your manager verbally approved you to do so? If it was his computer, then technically it was your manager's account that made the connection.
Of course, he could deny it and claim something like "someone hacked into my account/computer and did such and such."
Just curious: you mentioned you fixed the bug -- has the fix gone to production?
I don't understand how they can make your company pay. There must be written communication somewhere stating that the server you connected to is intended for acceptance testing. Regardless of who caused the problem, if the loss caused them damages, that's their own negligence; and if the server is actually a production server and they didn't communicate that properly, that is their own negligence too.
A "dutch baby" is not a baby. But this tiny ad is baby sized:
Devious Experiments for a Truly Passive Greenhouse!