Edge computing devices like NVIDIA Jetson boards (Nano, Orin, Thor) have ARM-based CPUs with different performance characteristics than x86 desktop processors. Before you can optimize edge workloads, you must understand how to measure CPU performance.

You will:

  • Use jtop to observe CPU metrics interactively.
  • Collect CPU utilization time series data.
  • Run a CPU-bound workload and measure its impact.
  • Analyze CPU metrics to identify potential bottlenecks.
  • Document baseline CPU performance for your Jetson device.

These skills are required for identifying whether performance issues in edge deployments are CPU-bound or GPU-bound.


Objective and Expected Learning Outcomes

By completing this assignment, you will be able to:

  • Use jtop to monitor CPU metrics on NVIDIA Jetson devices.
  • Collect CPU utilization time series data programmatically.
  • Distinguish between CPU utilization percentage and system load average.
  • Identify CPU bottlenecks in edge workloads.
  • Establish baseline CPU performance metrics for your specific Jetson platform.
  • Interpret per-core CPU utilization in multi-core ARM processors.

Edge Devices

This lab must be completed on an NVIDIA Jetson device. Your results will vary based on the number of CPU cores and clock speed of your specific device.


What You Are Given

Your repository includes:

  • scripts/monitorCPU.py (starter code with TODOs).
  • scripts/stressCPU.py (CPU workload generator, complete, no edits needed).
  • requirements.txt.
  • reflection.txt (reflection template).

Rules

  • Follow course guidelines about working on the development branch!
  • Do all work on your Jetson device (Nano, Orin, Thor, etc.).
  • You must commit logs/CPUMetrics.csv.
  • You must commit logs/CPUMetrics.png (visualization).
  • Do not commit .venv.
  • Ensure your code runs on Jetson with ARM architecture.

Prerequisites

Your instructor has pre-installed the required system tool (jtop); you may see a warning when it starts, which you can safely ignore. You will set up your own Python environment.

Note: All commands in this lab explicitly use python3 to avoid ambiguity between Python versions.


Step 1: Verify System Tools

jtop --version

If this fails, contact your instructor.


Step 2: Create Python Virtual Environment

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Step 3: Verify Python Packages

python3 -c "from jtop import jtop; print('jtop OK')"
python3 -c "import matplotlib; print('matplotlib OK')"

Part A: Observe CPU Metrics Interactively

Step A1: Identify Your Platform

Launch jtop:

jtop

In the interface, note:

  • Platform name (for example, Jetson AGX Orin).
  • Number of CPU cores.
  • CPU architecture (ARM64).

Record these in reflection.txt.
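If you want a quick cross-check of what jtop reports, the Python standard library can show the architecture and core count (the value in the comment is an example for Jetson; your output will reflect your device):

```python
# Cross-check core count and architecture using only the standard library
# (jtop reports the same information in its interface).
import os
import platform

print(f"Architecture: {platform.machine()}")  # e.g. 'aarch64' on Jetson devices
print(f"CPU cores: {os.cpu_count()}")
```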


Step A2: Observe Idle CPU

While jtop is running with no other workloads:

  • Observe idle CPU utilization per core.
  • Typically low (often 10 to 20 percent or less), depending on background services.
  • Observe system load average.
  • Interpret relative to core count (for example, on a 4-core system, a load of about 4 indicates full utilization).

Make note of these values in reflection.txt.
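To sanity-check the load-average interpretation above, you can normalize the load by the core count (standard library only; jtop shows the same numbers):

```python
import os

load1, load5, load15 = os.getloadavg()  # 1-, 5-, and 15-minute load averages
cores = os.cpu_count()
# A per-core value near 1.0 means the CPUs are, on average, fully utilized.
print(f"1-min load per core: {load1 / cores:.2f}")
```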


Step A3: Create a CPU Workload

Open a second terminal on the remote Jetson device and run:

python3 scripts/stressCPU.py --duration 30 --threads 4

Important behavior of stressCPU.py:

  • --threads controls how many worker processes are created.
  • The total amount of work is fixed and calibrated.
  • Increasing --threads spreads the same work across more workers.
  • Per-core utilization may decrease while total CPU work remains similar.
  • Each worker performs slightly different randomized work, similar to real applications.
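For intuition, a fixed-total-work stressor of this kind might look roughly like the following. This is a hypothetical sketch, not the actual stressCPU.py:

```python
# Hypothetical fixed-total-work CPU stressor (illustrative only).
import multiprocessing as mp
import random

TOTAL_ITERATIONS = 2_000_000  # total work stays fixed regardless of thread count

def worker(iterations, seed):
    rng = random.Random(seed)      # each worker does slightly different work
    acc = 0.0
    for _ in range(iterations):
        acc += rng.random() ** 2   # pure CPU-bound arithmetic
    return acc

def run(threads):
    perWorker = TOTAL_ITERATIONS // threads  # more workers => less work each
    with mp.Pool(threads) as pool:
        pool.starmap(worker, [(perWorker, i) for i in range(threads)])

if __name__ == "__main__":
    run(4)
```

With more workers, each core sees lower sustained utilization, but the total CPU time consumed stays roughly constant, matching the behavior described above.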

Observe in jtop:

  • How many CPU cores are active?
  • Whether some cores are near 100 percent.
  • Whether CPU activity is sustained.
  • Whether temperature increases.

This demonstrates sustained CPU-bound workloads on Jetson devices.


Part B: Complete the CPU Monitoring Script

Open scripts/monitorCPU.py. There are three TODOs.

TODO 1: Collect CPU Metrics

In collectCPUMetrics, extract CPU data from jtop using the provided helper:

CPUCores = getCPUCores(jetson.stats)
avgCPU = sum(CPUCores) / len(CPUCores) if CPUCores else 0

If CPU core data is empty, sleep briefly and continue rather than recording a sample.
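A minimal sketch of such a sampling loop, written with the reader injected so it runs without jtop; in the starter code, readCores corresponds to calling getCPUCores(jetson.stats) inside the jtop session:

```python
# Illustrative sampling loop with empty-read handling.
import time

def collectSamples(readCores, duration, interval):
    """readCores() returns a list of per-core utilization percentages."""
    samples = []
    start = time.time()
    while time.time() - start < duration:
        CPUCores = readCores()
        if not CPUCores:            # skip empty initial reads
            time.sleep(0.1)
            continue
        avgCPU = sum(CPUCores) / len(CPUCores)
        samples.append({'timestamp': time.time() - start,
                        'avgCPU': avgCPU,
                        'CPUCores': CPUCores})
        time.sleep(interval)
    return samples
```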


TODO 2: Store Sample Data

Each sample must include:

sample = {
    'timestamp': timestamp,
    'avgCPU': avgCPU,
    'CPUCores': CPUCores
}
samples.append(sample)

TODO 3: Create Visualization

In plotCPUMetrics, create a CPU utilization plot and save it (the PNG path variable name below is illustrative; use whatever the starter code provides):

plt.plot(timestamps, avgCPUs, linewidth=2, label='Avg CPU')
plt.axhline(y=meanCPU, linestyle='--', label=f'Mean: {meanCPU:.1f}%')
plt.xlabel('Time (seconds)')
plt.ylabel('CPU Utilization (%)')
plt.title(f'CPU Utilization - {platformName}')
plt.legend()
plt.grid(True, alpha=0.3)
plt.ylim([0, 100])
plt.tight_layout()
plt.savefig(outputPath, dpi=150)  # outputPath: the PNG path used by the starter code

Part C: Run the Monitoring Script (Idle)

python3 scripts/monitorCPU.py --duration 30 --interval 1.0 --output logs/CPUMetrics.csv

Verify output:

head -n 5 logs/CPUMetrics.csv

Expected format:

timestamp,avgCPU,core0,core1,core2,...
0.0,5.2,4.1,6.3,5.1,...
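One way to produce this format, with headers generated from the detected core count (the function name is illustrative; adapt it to the starter code's structure):

```python
# Write samples to CSV; the header adapts to however many cores were detected.
import csv

def writeCPUMetrics(samples, path):
    nCores = len(samples[0]['CPUCores'])
    header = ['timestamp', 'avgCPU'] + [f'core{i}' for i in range(nCores)]
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for s in samples:
            writer.writerow([s['timestamp'], s['avgCPU'], *s['CPUCores']])
```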

Check visualization:

ls -lh logs/CPUMetrics.png

Part D: Collect Metrics Under Load

Terminal 1:

python3 scripts/monitorCPU.py --duration 30 --interval 1.0 --output logs/CPUMetricsLoad.csv

Terminal 2 (start within about 5 seconds):

python3 scripts/stressCPU.py --duration 20 --threads 4

Compare idle versus load:

head -n 5 logs/CPUMetrics.csv
head -n 5 logs/CPUMetricsLoad.csv
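Pandas (optional; see Additional Resources) makes the idle-versus-load comparison straightforward. The column name below follows the expected CSV format from Part C:

```python
import pandas as pd

def meanCPU(path):
    # Mean of the avgCPU column across the whole run
    return pd.read_csv(path)['avgCPU'].mean()

# After both runs:
# print(f"Idle: {meanCPU('logs/CPUMetrics.csv'):.1f}%")
# print(f"Load: {meanCPU('logs/CPUMetricsLoad.csv'):.1f}%")
```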

Part E: Analysis and Reflection

Complete reflection.txt with:

  • Platform name.
  • Number of CPU cores.
  • Idle CPU average.
  • Load CPU average.
  • Observations during stress.
  • Reflection on identifying CPU bottlenecks.

Part F: Submission

Verify Required Files

ls -lh logs/
ls -lh scripts/monitorCPU.py
ls -lh reflection.txt
ls -lh requirements.txt

Required:

  • logs/CPUMetrics.csv
  • logs/CPUMetrics.png
  • scripts/monitorCPU.py
  • reflection.txt

Optional but recommended:

  • logs/CPUMetricsLoad.csv

On the development Branch: Commit and Push

All work for this lab must be committed to your development branch, not directly on main.

  1. Verify you are on the development branch:

git branch

If needed:

git checkout development

  2. Stage your completed files. You may run git add multiple times as you work:

git add scripts/monitorCPU.py
git add reflection.txt
git add logs/CPUMetrics.csv logs/CPUMetrics.png

  3. Commit your changes once the lab is complete:

git commit -m "Complete Lab 1: CPU Baseline Monitoring."

  4. Push your development branch to GitHub:

git push origin development

  5. Open a pull request from development to main on GitHub.
     This pull request is your official submission and must follow the course assignment guidelines.

Do not merge the pull request yourself unless explicitly instructed.


Evaluation Criteria

You will receive full credit if:

  1. scripts/monitorCPU.py runs successfully on a Jetson device.
  2. logs/CPUMetrics.csv contains valid timestamped per-core data.
  3. logs/CPUMetrics.png shows a CPU utilization plot.
  4. reflection.txt is complete and thoughtful.
  5. Code handles a variable number of CPU cores (Nano, Orin, Thor).
  6. .venv is not committed.

Common Issues

“jtop: command not found.”
Contact your instructor. jtop must be installed system-wide.

“ImportError: No module named ‘jtop’”
Ensure your virtual environment is activated and run:

pip install jetson-stats

“Script works on Orin but not Nano.”
Ensure:

  • CPU core count is handled dynamically.
  • CSV headers are generated from detected cores.
  • Empty initial CPU reads are skipped.

Additional Resources

  • jetson-stats Documentation: Official documentation for jetson-stats, including the jtop interactive monitor and programmatic access via Python. This is the authoritative reference for collecting CPU, GPU, memory, and power metrics on NVIDIA Jetson devices.
  • NVIDIA Jetson Platform Guide: Official NVIDIA documentation describing the hardware specifications and architectural details of all Jetson platforms. This resource is useful for understanding differences in CPU core counts, memory, and performance characteristics across devices.
  • NVIDIA tegrastats Utility: Official NVIDIA documentation for the tegrastats command-line utility, which provides low-level, real-time CPU, GPU, memory, and thermal statistics on Jetson devices. This is useful for lightweight monitoring and scripting outside of jtop.
  • Matplotlib Documentation: Comprehensive documentation for Matplotlib, the Python plotting library used in this lab to generate CPU utilization visualizations. This is the primary reference for customizing plots, labels, legends, and output formats.
  • Pandas Documentation: Documentation for the Pandas data analysis library. While optional for this lab, Pandas is useful for inspecting, filtering, and analyzing CSV-based time series data collected from Jetson devices.