Protecting the Phoenix: Unveiling Critical Vulnerabilities in Phoenix Contact HMI – Part 2

As introduced in the first part of this series, Nozomi Networks Labs discovered 14 vulnerabilities in the Phoenix Contact Web Panel 6121-WXPS device (firmware version 3.1.7). During our research, we identified that this device is affected by several critical issues that could be exploited by a remote attacker to completely compromise it. In this second blog, we’re going to drill down into the technical details of the process we followed to analyze the HTTPS service and find the most dangerous vulnerability, which allows an attacker to execute OS commands on the underlying Linux operating system without authentication.

Here are the key takeaways from our research: 

  • Leveraging the default SSH credentials provided by the vendor, we were able to access the underlying Linux-based operating system.
  • We discovered that the HTTPS web service is a NodeJS-based application that has been packed and obfuscated. By leveraging publicly available tools, we were able to reconstruct NodeJS source code nearly identical to the original.
  • After obtaining the source code and statically analyzing it, we were able to identify a critical vulnerability (CVE-2023-3572) which allows an attacker to execute arbitrary commands with root privileges without authentication.

In response to the issues we found, Phoenix Contact produced a new firmware release (v4.0.10) that addresses all the reported vulnerabilities, and confirmed that these issues affect not only the 6121-WXPS device but the entire WP6000 product family.

Background Information

As described in the first blog, the WP 6121-WXPS is one of the newest web-based HMIs (i.e., Human Machine Interfaces) produced by Phoenix Contact. Traditionally, HMIs are installed inside industrial control facilities and act as the main visual connection to the monitoring system of an automation solution. Once the HMI is configured, the device interacts with the designated monitoring system (i.e., a local or remote web service) via its embedded web browser, which then renders the output on the display.

Figure 1. Homepage of the Phoenix Contact HTTPS management service.

Via the physical Ethernet interface located on the bottom of the device, the operator can reach the device’s internal web service, which is used for configuring all of its functionalities, such as:

  • Network configuration: used for assigning and managing the device IP address
  • Web application: used for configuring the monitoring system webpage
  • Operation management: all the functionalities related to the device’s administration (e.g., VNC and Remmina server, SSL/TLS certificates, etc.)

Figure 2. Phoenix Contact WP 6121-WXPS interfaces in the backplane.

Information Gathering

After reading the documentation provided by Phoenix Contact, we found default credentials for both the unprivileged account (i.e., browser) and the administrative account (i.e., root). Leveraging this knowledge, we accessed the local Linux shell through the SSH service exposed by default by the device and started investigating the details of the device’s network services.

Figure 3. Network services exposed by the device.

We identified that the device uses an nginx server to handle the HTTPS traffic coming to the device’s Ethernet interface. For this reason, we inspected the nginx configuration at /etc/nginx/nginx.conf:

Figure 4. Partial content of the nginx configuration file.

As we can see from the location / { ... } block of the configuration file, as soon as a request reaches nginx, it is forwarded to another server running on the same machine on TCP port 8080 (i.e., http://127.0.0.1:8080). We discovered that this application is the /opt/cockpit/cockpit binary.

Figure 5. Details about the network service exposed on the TCP/8080.

Reverse Engineering - Unpacking

The cockpit application is a stripped ELF binary:

$ file opt/cockpit/cockpit
cockpit: ELF 64-bit LSB executable, ARM aarch64, version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, BuildID[sha1]=6ac5477cb8643828b93162bdd8c6d60fc3dde1af, for GNU/Linux 3.7.0, stripped

Leveraging the strings in the binary, we discovered that it contains many snippets of JavaScript code. Notably, at the end of the binary, the following strings can be found:

Figure 6. Strings located at the end of the cockpit binary file.
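These trailing strings can be dumped with a standard utility, for example along these lines (a sketch of the approach rather than the exact command we ran):

$ strings opt/cockpit/cockpit | tail -n 50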

These string patterns are typically found inside executables that have been packed with popular NodeJS bundlers, such as the vercel/pkg utility. Reading the “vercel/pkg” documentation, we discovered that, when used with a default package configuration file (i.e., package.json), this utility behaves as follows (reference here):

  • It compiles the application JS code with the V8 compiler, and it produces the JS bytecode (i.e., snapshot)
  • It stores the JS bytecode, the original JS code and any other static assets inside a virtual file system
  • It finally packs this file system, along with a bundled Node.js engine, into the final executable

When the packed application is executed, it essentially boots the bundled Node.js runtime and runs the JS bytecode stored in the virtual file system.

We decided to do some experiments with the "vercel/pkg" utility to better understand it. For this reason, we set up a simple NodeJS application with the following default configuration file:

$ mkdir test
$ cd test && npm init -y
$ npm install --save-dev pkg

# Adding "build" property so that the package is built for MacOS ARM64:

$ cat package.json
{
 "name": "test",
 "version": "1.0.0",
 "description": "",
 "main": "app.js",
 "bin": "app.js",
 "scripts": {
   "test": "echo \"Error: no test specified\" && exit 1",
   "build": "pkg . --target node14-macos-arm64 --debug"
 },
 "keywords": [],
 "author": "",
 "license": "ISC",
 "devDependencies": {
   "pkg": "^5.8.1"
 }
}
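For completeness, the manifest points to an app.js entry point; any trivial file is enough to exercise the bundler, for example (a hypothetical placeholder, not necessarily the file we used):

$ cat app.js
// Minimal entry point, used only to give "vercel/pkg" something to bundle.
const os = require('os');
console.log(`Hello from ${os.platform()}/${os.arch()}`);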

After running the npm run build command, we observed the following behavior:

Figure 7. Building the testing NodeJS application.

As we can see from these findings, it seems that the “vercel/pkg” utility included both the V8 bytecode files and the original JavaScript source code by default:

Figure 8. Debugging messages produced by the “pkg” utility.

To confirm this behavior, we used the pkg-unpacker tool on the final application produced by the “vercel/pkg” utility and successfully retrieved the original JS source code:

Figure 9. Extracting original JS code.

After some research, we found this explanation: “By default, pkg will check the license of each package and make sure that stuff that isn't meant for the public will only be included as bytecode.”

This portion of our research reveals that if the license property is set to the default "ISC" value, or the “private” property is not set to “true”, the “vercel/pkg” utility includes both the V8 bytecode and the original JS code by default. After deleting the license property from the package.json file, we got the following debugging messages:

Figure 10. Debugging messages produced by “pkg” after deleting the “license” property from package.json file.

As expected, by running the pkg-unpacker tool on the application built without the "license" property, we were able to extract only the V8 bytecode, not the original JS code:

Figure 11. The extracted file embedded in the NodeJS application is a V8 bytecode file.
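In practical terms, based on the pkg documentation quoted above, a developer who does not want readable source embedded in the executable can mark the package as private (or drop the permissive default license). A minimal manifest along these lines (an illustrative sketch, not Phoenix Contact’s file) should make pkg emit bytecode only:

{
 "name": "test",
 "version": "1.0.0",
 "private": true,
 "bin": "app.js",
 "scripts": {
   "build": "pkg . --target node14-macos-arm64"
 },
 "devDependencies": {
   "pkg": "^5.8.1"
 }
}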

We executed the pkg-unpacker tool on the cockpit NodeJS application (i.e., the application which implements the HTTPS server exposed by the Phoenix Contact device) and were able to extract the embedded JavaScript source code.

Figure 12. Unpacking of the Phoenix Contact NodeJS application (i.e., cockpit).
Figure 13. (Partial) JavaScript code of the NodeJS application.

As we can see from its package.json file, since the cockpit application was compiled with the default "license" property set to "ISC", vercel/pkg wrote both the JS bytecode and the raw JS code inside the virtual file system of the NodeJS application. As previously confirmed, this behavior allows automated tools, such as pkg-unpacker, to extract the embedded JS code.

Figure 14. Cockpit package configuration file (i.e., package.json).

Reverse Engineering - Deobfuscation

After extracting all JavaScript source code files embedded inside the cockpit application, we found that they were obfuscated to prevent an outsider from reading them and retrieving sensitive information.

Figure 15. The router.js file is obfuscated and minified.

To subvert this protection mechanism, we started by examining the package.json file used to store metadata about the project. As we can see in the following screenshot, it’s clear that, before the final executable was built and packed with “vercel/pkg”, the JS code was obfuscated with the popular javascript-obfuscator tool.

Figure 16. Cockpit package configuration file (i.e., package.json).

Thanks to the JS De-obfuscator utility, we were able to deobfuscate all JS code:

Figure 17. Deobfuscated JS code.

Vulnerability Research: CVE-2023-3572

After deobfuscating all the JavaScript code embedded in the NodeJS application, we statically analyzed it and found that the JS “posttime” function is invoked to handle all HTTP POST requests to the "/api/tmd" API exposed by the NodeJS application.

As shown in the following evidence, the JavaScript function concatenates all parameters received in the HTTP body and uses them as arguments for "timedatectl". This command is then executed on the underlying Linux OS through the standard NodeJS child_process.execSync() function, so that the date is finally set on the WP 6121-WXPS device.

Figure 18. Vulnerable JS code located at controllers/regionalcontroller.js

After reading the NodeJS child_process.execSync() documentation, we found that the behavior of this function is to spawn an OS shell and then execute the provided command through it.

The string passed to the execSync() function is processed directly by the shell, and special characters (which vary depending on the shell) need to be handled accordingly. Since the final command is computed at run-time from attacker-controllable input and then passed to the underlying Linux shell for execution, this condition allows an attacker to easily trigger an OS command injection.
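To illustrate the class of defect, below is a simplified sketch of the vulnerable pattern rather than Phoenix Contact’s exact code: the web framework, the body field names other than "min", and the exact timedatectl invocation are assumptions made for readability.

const express = require('express');              // assumption: an Express-style HTTP stack
const { execSync } = require('child_process');

const app = express();
app.use(express.json());

// Simplified reconstruction of the vulnerable pattern: untrusted body fields
// are concatenated into a shell command line and executed with execSync().
app.post('/api/tmd', (req, res) => {
  const { year, month, day, hour, min } = req.body;   // field names are illustrative
  // BAD: whatever the client sends reaches the shell unescaped.
  const cmd = `timedatectl set-time "${year}-${month}-${day} ${hour}:${min}:00"`;
  execSync(cmd);   // spawns /bin/sh -c <cmd>; cockpit runs as root, so injected commands do too
  res.send('time updated');
});

app.listen(8080);

With a handler of this shape, anything the shell interprets inside the concatenated fields is executed with the privileges of the cockpit process; the exact body encoding used by the device is intentionally not reproduced here.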

Leveraging this knowledge, we were able to craft a malicious HTTP POST request in which the "min" (i.e., minutes) value contains a Bash subshell command. Because the cockpit process runs with root privileges, as soon as it receives this request, our malicious payload is executed on the target Linux OS as root.

As shown in the following screenshot, by exploiting this software defect, we successfully gained an administrative shell on the target device. It’s important to note that, since this HTTP API does not require authentication, an attacker with network visibility of the web application can easily gain administrative privileges on the device.

Figure 19. Execution of arbitrary command on the target Linux OS.

Conclusion

In this second blog regarding the Phoenix Contact WP 6121-WXPS, we described the methodology and the steps that allowed us to gain administrative privileges on the device. To achieve this goal, it was necessary to reverse engineer the NodeJS server (by unpacking and deobfuscating it). Finally, after finding a software defect in the “/api/tmd” API, we were able to exploit the “OS Command Injection” vulnerability and achieve arbitrary code execution on the underlying Linux OS with root privileges.

In part 3, we'll drill down into the process we used to analyze and exploit all the vulnerabilities affecting the SNMP protocol, and specifically how an attacker could chain these issues to get an administrative shell without authentication. Stay tuned!