This is the first of four scenarios covered in the post series:
“HTTPS tracing and debugging: A simple way”
Let’s start with one of the simplest cases to cover: non-encrypted on-prem to on-prem communication. It is a scenario where we have more flexibility when it comes to understanding the setup and infrastructure.
In this first example we have a client that wants to consume an HTTP service provided by a local server, so the simplest approach is to use mitmproxy in the standard “proxy” mode (the same style of proxy that many companies use to give their employees access to the Internet). In this first exercise, we will become familiar with operating mitmproxy in its command-line mode.
Observation: the purpose of these posts is not to cover how to instantiate or create services, so I will give only limited detail on that and focus on the operation of the proxy. If you have any doubts, leave them in the comments and they can be used to enrich the Q&A section of the last post of this series.
Consumption of a NodeJS HTTP service from Postman via Proxy
For this first example, I will use an HTTP echo module, available as an npm package ready to run on NodeJS. This service simply responds with the same HTTP data it receives in the request. We will also start a mitmproxy command-line session. I am using a desktop Linux instance; adapt the commands to your OS. (Tip: if you are using Windows 10, you can run mitmproxy under the Windows Subsystem for Linux… and get openSUSE.)
Please ensure that you can open and use local ports 8080 and 8085; otherwise, change the ports accordingly in the steps below.
1.1.- NodeJS service instance
Download the module and then run the service to serve HTTP calls on port 8085:
npm i http-echo-server
node ./node_modules/http-echo-server/index.js 8085
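To make the echo behaviour concrete, here is a minimal Python sketch of what such a service does: it reads the raw HTTP request and sends it straight back as the response body. This is my own stand-in, not the http-echo-server implementation, and the function names are mine.

```python
import socket
import threading

def recv_request(conn: socket.socket) -> bytes:
    """Read whatever the client sends, stopping after a short quiet period."""
    conn.settimeout(0.2)
    chunks = []
    try:
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            chunks.append(chunk)
    except socket.timeout:
        pass
    return b"".join(chunks)

def serve_echo(port: int = 8085) -> socket.socket:
    """Start the echo server; returns the listening socket (close it to stop)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)

    def loop() -> None:
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:  # listening socket was closed
                return
            raw = recv_request(conn)
            # Echo the raw request back as the body of a 200 response.
            conn.sendall(
                b"HTTP/1.1 200 OK\r\n"
                b"Content-Type: text/plain\r\n"
                b"Content-Length: " + str(len(raw)).encode() + b"\r\n\r\n" + raw
            )
            conn.close()

    threading.Thread(target=loop, daemon=True).start()
    return srv
```

Whatever you send (method, path, headers, body) comes back verbatim, which is what makes an echo service so convenient for checking what actually went over the wire.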
1.2.- Proxy instance
Simply run the mitmproxy binary and a very basic UI will be displayed. At this point, a proxy server is listening on port 8080. (You can change the port and add authentication; for this, check the documentation with mitmproxy -h.)
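As an aside, mitmproxy also has a Python scripting interface: its headless companion, mitmdump, can load an addon script with mitmdump -s. A minimal logging addon could look like the sketch below (the file name log_flows.py is my own choice; the request/response hook names are mitmproxy’s event hooks):

```python
# log_flows.py -- run with: mitmdump -s log_flows.py
# mitmproxy calls these hook functions for every flow it proxies.

def request(flow):
    """Called when a client request has been read by the proxy."""
    print(f"[request] {flow.request.method} {flow.request.pretty_url}")

def response(flow):
    """Called when the server response has been read by the proxy."""
    print(f"[response] {flow.response.status_code} for {flow.request.pretty_url}")
```

For this post we will stay in the interactive UI, but the scripting route is handy once you want to automate a trace.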
1.3.- Postman test call
First, we need to tell Postman that we will use a proxy for our connections. This is done in the settings, under the “Proxy” tab. As we are using a locally hosted proxy, the proxy server address is “localhost”; however, if your request comes from a different on-prem server, you will need to use the proxy machine’s IP or LAN hostname instead.
We create a POST call with some headers, parameters and a body. The URL points to our echo server on port 8085 (again, adjust the hostname/IP to your landscape). Don’t forget to put “http://” at the beginning of the URL; otherwise, Postman’s proxy configuration will be bypassed. Execute the call and wait for the response.
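The same call can also be scripted. Here is a sketch using only Python’s standard library; the proxy address, URL, function name and X-Debug header are placeholders of my own, to be adjusted to your landscape:

```python
from typing import Optional
import urllib.request

def proxied_post(url: str, data: bytes,
                 proxy: Optional[str] = "http://localhost:8080") -> bytes:
    """POST `data` to `url`, optionally routing through an explicit HTTP proxy."""
    # ProxyHandler plays the role of Postman's "Proxy" settings tab.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy} if proxy else {})
    )
    req = urllib.request.Request(url, data=data, method="POST")
    req.add_header("X-Debug", "1")  # an extra header, like in the Postman call
    with opener.open(req, timeout=5) as resp:
        return resp.read()
```

With the echo server and mitmproxy from the previous steps running, proxied_post("http://localhost:8085/demo", b"hello") would send the request through the proxy, and the flow would appear in the mitmproxy UI just like the Postman call.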
1.4.- Check your proxy UI
Back in the proxy UI, we can verify that a call has been registered and executed by the proxy. Important information is already provided, such as destination, URL, method, parameters, HTTP response code and total time.
Still, we can obtain many more details: simply press Enter to see them and scroll with the cursor keys. Among the details we can see are:
- The full content of the request, including the headers automatically added by Postman.
- The full content of the response, including the body and headers.
- The timing details and connection data. This view also lets you review possible redirects, certificates and name resolution, when applicable.
Please, stop now and think about what just happened:
- An HTTP call was made, using our proxy
- The proxy recorded and forwarded the request to the final server
- The target HTTP server responded to the proxy
- The proxy recorded the response and finally delivered it to the client
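These four steps are the whole trick. Stripped of everything mitmproxy adds on top (TLS interception, the UI, filtering), a toy forward proxy fits in a few dozen lines. The sketch below is my own illustration of the flow, not mitmproxy internals, and all names in it are mine:

```python
import socket
import threading
from urllib.parse import urlsplit

def _read(conn: socket.socket) -> bytes:
    """Read bytes until the peer goes quiet or closes the connection."""
    conn.settimeout(0.2)
    chunks = []
    try:
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            chunks.append(chunk)
    except socket.timeout:
        pass
    return b"".join(chunks)

def _handle(client: socket.socket) -> None:
    """One request: record it, forward it, record the reply, deliver it."""
    raw = _read(client)                                  # 1. client -> proxy
    request_line = raw.split(b"\r\n", 1)[0].decode()
    method, target, _version = request_line.split(" ")
    print(f"[proxy] {method} {target}")                  # 2. record + forward
    parts = urlsplit(target)
    # Proxies receive absolute-form targets; rewrite to origin-form.
    origin_form = (parts.path or "/") + (("?" + parts.query) if parts.query else "")
    raw = raw.replace(target.encode(), origin_form.encode(), 1)
    upstream = socket.create_connection((parts.hostname, parts.port or 80), timeout=5)
    upstream.sendall(raw)
    reply = _read(upstream)                              # 3. server -> proxy
    print(f"[proxy] {reply.split()[1].decode()} from {parts.netloc}")
    client.sendall(reply)                                # 4. proxy -> client
    upstream.close()
    client.close()

def serve_proxy(port: int = 8080) -> socket.socket:
    """Start the toy proxy; returns the listening socket (close it to stop)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)

    def loop() -> None:
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:  # listening socket was closed
                return
            _handle(conn)

    threading.Thread(target=loop, daemon=True).start()
    return srv
```

Because the proxy sits in the middle of the plain-text conversation, it can print (or store) everything it relays, which is exactly the property we exploited for tracing.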
Great, we’ve already done our first HTTP trace in an on-prem to on-prem scenario! Now, let’s move on to the next scenario.
Ariel Bravo Ayala