Recently we had an interesting (and rather hard) experience consuming a legacy application service in OSB. The legacy application exposes an MQ-based interface: it receives the request message on one queue, reads the reply-to queue manager and reply-to queue names from the message header, and then puts the response on that specified queue manager/queue, which sits on their side.
If you have worked with MQ, you know that setting up local queues, pointing them to remote queues with the correct queue manager names, and so on can be quite tricky. We wasted weeks debugging an issue where the remote response queue manager's host name was incorrect; coordinating with another team and doing this across two different environments was a big pain.
Even more painful, the request/response flow never worked for months. After quite a bit of heartburn, it was found that although we were setting the reply-to queue manager/queue names in the request message header, OSB was overwriting them with the local queue manager/queue names.
It turned out that if you use the MQ transport in request-response mode (the synchronous mode), the response queue manager/queue name has to be the same as the request queue manager/queue name; if not, OSB will overwrite it. This is not a problem, however, if the message pattern is one-way (i.e. request only).
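To make the behaviour concrete, here is a small Python sketch (not real OSB or MQ API code) that models the difference between the two modes. All function and header names here are hypothetical:

```python
# Toy model: in request-response (synchronous) mode the transport pins the
# reply-to headers to its own local queue manager/queue, discarding whatever
# the caller set; in one-way mode the headers pass through untouched.

def send_request_response(message, headers, local_qmgr, local_queue):
    """Synchronous mode: caller-supplied reply-to headers are overwritten."""
    out = dict(headers)
    out["ReplyToQMgr"] = local_qmgr   # caller's value silently discarded
    out["ReplyToQ"] = local_queue
    return message, out

def send_one_way(message, headers):
    """One-way mode: reply-to headers are preserved as set by the caller."""
    return message, dict(headers)

hdrs = {"ReplyToQMgr": "LEGACY_QMGR", "ReplyToQ": "LEGACY_RESP_Q"}
_, sync_hdrs = send_request_response("req", hdrs, "LOCAL_QMGR", "LOCAL_Q")
_, oneway_hdrs = send_one_way("req", hdrs)
# sync_hdrs now carries LOCAL_QMGR/LOCAL_Q; oneway_hdrs keeps the legacy names.
```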
Please see more details about the 'reply to queue manager' and 'reply to queue' transport header properties here.
So to solve the problem we introduced an intermediate queue between our request queue and the remote request queue. The first business service (BS) uses the MQ transport in request/response mode and pushes the request to the intermediate queue. Another proxy service listening on this intermediate queue picks up the message and pushes it to the real request queue, setting the reply-to queue manager/queue names in the transport headers as required by the legacy application.
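The two-hop workaround can be sketched as a toy model using plain in-memory queues (hypothetical names throughout; this is an illustration of the pattern, not actual OSB configuration):

```python
import queue

# Hop 1 (business service) runs in request/response mode, so its reply-to
# headers get pinned to the local response queue. Hop 2 (relay proxy) runs
# one-way, so it is free to set the reply-to headers the legacy app expects.
intermediate_q = queue.Queue()
legacy_request_q = queue.Queue()

def business_service(payload):
    # Hop 1: the transport forces reply-to to local names (simulated here).
    intermediate_q.put({"body": payload,
                        "ReplyToQMgr": "LOCAL_QMGR",
                        "ReplyToQ": "LOCAL_RESPONSE_Q"})

def relay_proxy():
    # Hop 2: rewrite the reply-to headers as the legacy application
    # requires, then forward one-way so nothing overwrites them.
    msg = intermediate_q.get()
    msg["ReplyToQMgr"] = "LEGACY_RESP_QMGR"  # hypothetical remote names
    msg["ReplyToQ"] = "LEGACY_RESP_Q"
    legacy_request_q.put(msg)

business_service("order-123")
relay_proxy()
# The message on legacy_request_q now carries the legacy reply-to headers.
```

The key design point is that the leg that actually reaches the legacy request queue is one-way, which is the mode where OSB leaves the reply-to headers alone.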
There was some apprehension about correlation ID, message ID, etc., but this worked well and we were able to move forward.