If you support software you are probably familiar with this scenario:
A person responsible for system X seeks your help because "System Y is broken: it is reporting problem Z with X". The premise is that it must be System Y's fault, because X is not reporting any error. You respond, "X has problem Z". But they can't find any problem Z, so it must be Y. To be fair, Y is not infallible: it could be reporting incorrectly or otherwise misbehaving. So you check the logs, follow the interactions, verify the values, times, and events... and come to the conclusion "Y indicates X has problem Z". After you explain your findings, they fix Z in X. I've been on both sides of this equation.
Why do we find it so hard to imagine an error occurring on 'our' end? Partly because if we could imagine the error, we would already be on the path to fixing it. And sometimes checking the 'other end' quickly reveals something outside your control. That second reason is, however, a big red warning flag. If 'this end' does not provide enough information to tell you whether it is successfully interacting with some other system, it is time to add that feature. This is the first step to diagnosing the current problem, and it keeps you out of the same cycle on the next one. Once you have that information, you are one step closer to imagining the real cause.
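What does "add that feature" look like in practice? A minimal sketch, assuming Python and a hypothetical `send_to_system_y` helper: wrap each interaction with the other system and log the outcome on *your* side, so your system can answer "did my call succeed?" without anyone digging through the other system's logs. All names here are illustrative, not from any real API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("system_x")


def send_to_system_y(payload, transport):
    """Send payload to System Y and record the outcome on our side.

    `transport` is any callable that delivers the payload and returns
    Y's response; both names are hypothetical stand-ins.
    """
    log.info("sending to Y: %r", payload)
    try:
        response = transport(payload)
    except Exception:
        # Record the failure here, not just in Y, so that 'this end'
        # can report whether the interaction succeeded.
        log.exception("call to Y failed for payload %r", payload)
        raise
    log.info("Y accepted %r, responded %r", payload, response)
    return response
```

With this in place, the conversation above changes: instead of "X is not reporting any error", X's own log shows exactly which interactions with Y succeeded and which did not.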
This topic is closely related to the phenomenon where, when you are stuck on a seemingly unsolvable problem, simply explaining it to someone else can surface the answer without any input from the listener. Some workplaces find this so compelling that they set up a doll or teddy bear that developers must talk to before asking anyone else. The great thing about computer systems is that they can be vastly more informative than teddy bears. If you believe them!