Security teams that deal with connected devices often end up running long firmware scans overnight, checking progress in the morning, and trying to explain to colleagues why a single image consumed a workday of compute time. That routine sets the context for a new research paper that examines how the EMBA firmware analysis tool behaves when it runs in different environments.
The study looks at EMBA deployments on a local standalone system and on a Microsoft Azure virtual machine. It focuses on execution time, repeatability, cost, and operational characteristics that matter to practitioners who rely on firmware analysis as part of regular security work.
What the researchers set out to measure
The authors designed the research around a practical question that comes up often in security teams. They wanted to understand how deployment choices shape the day-to-day experience of using EMBA for Internet of Things firmware analysis.
The experiments used identical EMBA configurations across environments. The same firmware samples ran on local systems and in the cloud. Multiple EMBA versions were included to capture changes over time in module behavior and execution patterns. Scan duration, system resource use, and the number of findings were recorded for each run.
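For context, an EMBA run of the kind described here is launched from the tool's checkout directory, with the firmware image, a log directory, and optionally a scan profile passed on the command line. The sketch below is illustrative only; the paths and profile name are placeholder assumptions, not the configuration used in the study.

    # Illustrative EMBA invocation; paths and the profile are placeholders,
    # not the study's configuration. EMBA requires root for full analysis.
    sudo ./emba -f ~/firmware/sample.bin \
                -l ~/emba_logs/sample_run1 \
                -p ./scan-profiles/default-scan.emba

Holding the invocation, profile, and tool version constant across environments is what makes the runtime and findings comparisons meaningful.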
The study used a small set of firmware images selected for their ability to trigger most EMBA modules. That choice allowed the researchers to observe long-running behavior across extraction, static analysis, and dynamic checks.
Execution time and repeatability in practice
One of the main areas the paper explores is execution time. Firmware scans often stretch into many hours, especially for medium and large images. The researchers tracked scan durations down to the second and repeated runs to measure consistency.
Repeated executions on the same platform produced nearly identical run times and findings. That behavior matters for teams that depend on repeatable results during testing, validation, or research work. It also supports the use of EMBA in environments where scans need to be rerun with the same settings over time.
The data also shows that firmware size alone does not explain scan duration. Internal structure, compression, and embedded components influenced how long individual modules ran. Some smaller images triggered lengthy analysis steps, especially during deep inspection stages.
Research co-author Kenan Sansal Nuray, a Research Scientist Assistant at the UTSA Carlos Alvarez College of Business, told Help Net Security that the results point to a need for more deliberate scan planning. He explained that EMBA behavior follows the internal layout of firmware images. For unfamiliar or highly customized firmware, he advised teams to begin with structural reconnaissance such as filesystem identification and unpacking validation. Early structural insight supports informed module selection and helps teams manage analysis time in firmware with unusual layouts.
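As a concrete illustration of that reconnaissance step (a generic sketch, not a procedure taken from the paper), a binwalk signature scan identifies filesystems and compression regions, and an extraction pass confirms the image actually unpacks:

    # Identify filesystems, compression, and embedded components by signature.
    binwalk firmware.bin

    # Validate unpacking; extracted content lands in _firmware.bin.extracted/.
    binwalk -e firmware.bin

A few minutes spent this way gives an early indication of which EMBA stages are likely to dominate a full scan of an unfamiliar image.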
Cloud deployments through a practitioner lens
The Azure virtual machine setup followed a common pattern used by security teams. The instance matched the local system in core count and memory. Ubuntu was used in line with EMBA guidance.
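A deployment of this general shape can be stood up with the Azure CLI. The sketch below is an assumption about how such an instance might be created; the resource names, region, and VM size are placeholders rather than the specification from the study.

    # Hypothetical Azure VM for EMBA; names, region, and size are placeholders.
    az group create --name emba-rg --location eastus
    az vm create \
      --resource-group emba-rg \
      --name emba-scanner \
      --image Ubuntu2204 \
      --size Standard_D8s_v5 \
      --admin-username emba \
      --generate-ssh-keys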
Cloud execution showed runtime patterns tied to virtualization, disk access, and shared infrastructure. Some firmware scans completed within expected windows while others ran longer, depending on which modules dominated runtime; modules built around decompilation and pattern matching accounted for the longest execution periods.
The researchers also tracked cloud costs tied to scan duration. The paper reports several hundred dollars in usage charges for a limited number of scans. That information gives security managers a concrete reference point when planning analysis workloads.
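The paper reports the aggregate figure rather than per-scan pricing, but the underlying arithmetic is simple to reproduce. The rates and durations below are hypothetical, chosen only to show how scan-hour billing accumulates:

    # Hypothetical numbers for illustration; not figures from the paper.
    echo '0.40 * 18' | bc        # one 18-hour scan at $0.40/hour: $7.20
    echo '0.40 * 18 * 50' | bc   # 50 such scans: $360, i.e. hundreds of dollars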
Nuray said cloud-based EMBA deployments fit well into large-scale scanning activity. He described cloud execution as a practical option for parallel analysis across many firmware images. Local systems, he added, support detailed investigation where teams need tight control over execution conditions and repeatability.
Standalone systems and operational control
Local standalone systems provided a controlled environment. Hardware resources remained consistent across runs, and firmware files stayed on local storage throughout the process. That setup supports repeatable testing and simplified data handling.
The study describes predictable execution behavior during repeated scans. Module timing remained stable across runs, which helps teams plan scan windows and system availability. The paper also notes that a one-time hardware investment supports ongoing analysis without usage-based billing.
Standalone systems still require maintenance, storage planning, and system updates. The research frames these activities as part of routine operational ownership that security teams already manage for other tooling.
Module-level behavior matters
Some modules consumed a significant share of total scan duration across both environments. Decompilation, deep extraction, and text searching contributed heavily to runtime.
Other modules completed quickly and showed similar timing patterns across platforms. The researchers describe this behavior as an interaction between firmware structure and module design. That interaction shapes overall scan behavior and points to opportunities for tuning analysis profiles based on firmware characteristics.
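EMBA exposes hooks for that kind of tuning: a run can be limited to a scan profile or to selected modules. A minimal sketch follows, assuming EMBA's documented -m module-selection option; the module class chosen here is an example, not a recommendation from the paper.

    # Run only the extraction/preparation (P) modules to profile a new image
    # before committing to a full scan. Module class selection is an
    # assumption based on EMBA's documented -m option.
    sudo ./emba -f ~/firmware/sample.bin -l ~/emba_logs/recon -m p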
Using the research to plan firmware analysis work
The research positions deployment choice as an operational planning decision tied to workload volume, budget forecasting, and data handling preferences. EMBA produced consistent findings across environments when configuration remained the same. Execution characteristics reflected where and how the tool ran.
Nuray described a hybrid analysis model that combines both environments. In this approach, cloud infrastructure supports initial triage and bulk scanning across large firmware sets. Selected images then move to local systems for deeper validation and follow-up analysis. He said this model aligns with established firmware security workflows and supports scale without sacrificing investigative depth.
Security teams running firmware analysis often learn most from the moments when scans stretch longer than planned or results arrive later than expected. Studies like this tend to surface at exactly those points, when teams start asking where analysis work belongs and how to organize it.
