Changes from 3 commits
3 changes: 3 additions & 0 deletions .gitignore
@@ -123,3 +123,6 @@ run
!test/ctx_register.js

.egg/

# Benchmark test files
benchmark/stream_download/nginx/50mb_ones.txt
1 change: 1 addition & 0 deletions benchmark/stream_download/.gitignore
@@ -0,0 +1 @@
tmp
43 changes: 43 additions & 0 deletions benchmark/stream_download/Dockerfile
@@ -0,0 +1,43 @@
FROM node:24.12.0
Contributor

high

The Node.js version 24.12.0 specified does not appear to be a valid or current version. The latest Node.js version is 22.x, and the current LTS is 20.x. Using a non-existent version will cause the build to fail. Please use a current stable or LTS version. Using an -alpine image is also recommended for smaller image sizes.

FROM node:20.14.0-alpine


# Install nginx and other required tools
RUN apt-get update && apt-get install -y \
    nginx \
    curl \
    vim \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean
Comment on lines 4 to 11
Contributor

medium

To optimize the Docker image size, it's recommended to:

  1. Use --no-install-recommends with apt-get install to avoid installing unnecessary packages.
  2. Remove vim as it's a large dependency and generally not needed in a production or benchmark image. If you need to debug, you can docker exec into a running container and install it manually.
RUN apt-get update && apt-get install -y --no-install-recommends \
    nginx \
    curl \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean


# Create the nginx config directory
RUN mkdir -p /etc/nginx/conf.d

# Copy the nginx config file
COPY nginx.conf /etc/nginx/sites-available/default

# Create the nginx working directory
RUN mkdir -p /var/www/html

# Create the startup script
COPY start-nginx.sh /usr/local/bin/start-nginx.sh
RUN chmod +x /usr/local/bin/start-nginx.sh

# Expose ports
EXPOSE 80 9229

# Set the working directory
WORKDIR /var/www/html

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost/health || exit 1

RUN mkdir -p /root/workspace

COPY gc.js /root/workspace/gc.js
COPY benchmark.js /root/workspace/benchmark.js
COPY benchmark_undici.js /root/workspace/benchmark_undici.js

RUN cd /root/workspace && npm i urllib --registry https://registry.npmmirror.com
Contributor

medium

Installing npm packages directly with npm i inside the Dockerfile is not ideal for dependency management. It's better practice to add urllib to a dependencies section in your package.json, copy package.json (and package-lock.json) into the image, and then run npm install or npm ci. This makes your dependencies explicit and leverages Docker's layer caching more effectively. I've added a separate comment on package.json with a suggestion. With that change, this line should be updated to use npm install.

RUN cd /root/workspace && npm install --registry https://registry.npmmirror.com

Copilot AI Dec 25, 2025

The npm package installation uses a hardcoded Chinese mirror registry. For a project that may be used internationally, consider using the default npm registry or making the registry configurable. If the Chinese mirror is required for specific performance reasons, consider adding a comment explaining why.


# Startup command
CMD ["/usr/local/bin/start-nginx.sh"]
51 changes: 51 additions & 0 deletions benchmark/stream_download/README.md
@@ -0,0 +1,51 @@
# Nginx Download/Upload Test Server

## Quick Start

> **Note**: Switch to the `benchmark/stream_download` directory before running the commands below
Comment on lines +1 to +5
Copilot AI Dec 24, 2025

The benchmark script is located in a directory called 'stream_download', but it performs both download AND upload operations. The directory name doesn't accurately reflect its purpose. Consider renaming to 'stream_benchmark' or 'stream_upload_download' to better represent the full scope of functionality.

Suggested change
# Nginx Download/Upload Test Server
## Quick Start
> **Note**: Switch to the `benchmark/stream_download` directory before running the commands below
# Nginx Download/Upload Streaming Benchmark Server
## Quick Start
> **Note**: Switch to the `benchmark/stream_download` directory (the download/upload streaming benchmark directory) before running the commands below


### Build the image

```bash
docker build --platform linux/amd64 -t nginx-node-benchmark .
```

### Run the container

```bash
docker run --rm -d --platform linux/amd64 \
  --name nginx-node-benchmark \
  -p 8080:80 \
  -v $(pwd)/nginx:/var/www/html \
  nginx-node-benchmark
```

### Test

```bash
# Download test
curl -v http://localhost:8080/download/test-file.txt

# Upload test
curl -v -X POST -d "test" http://localhost:8080/upload/
```

### Stop

```bash
docker stop nginx-node-benchmark && docker rm nginx-node-benchmark
```

### Generate the large test file

```bash
sh generate_50mb_file.sh
```

### Run the Node benchmark

```bash
docker exec -ti nginx-node-benchmark bash
cd /root/workspace
node benchmark.js
```
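Note: the benchmark loop can be paused and resumed via `SIGUSR2` (see the `process.on('SIGUSR2', ...)` handler in benchmark.js). Assuming the Node process is visible inside the container, something like `kill -USR2 $(pgrep -f benchmark.js)` should toggle it.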
60 changes: 60 additions & 0 deletions benchmark/stream_download/benchmark.js
@@ -0,0 +1,60 @@
const urllib = require('urllib');
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for package.json files and check for urllib dependency
find . -name 'package.json' -type f | head -20



🏁 Script executed:

#!/bin/bash
# Check the main package.json for urllib
cat ./package.json



🏁 Script executed:

#!/bin/bash
# Also search for urllib references in package.json files
rg -l 'package\.json' -g '**' | head -5
rg 'urllib' package.json -A 2 -B 2



🏁 Script executed:

#!/bin/bash
# Check the benchmark package.json
cat ./benchmark/stream_download/package.json



🏁 Script executed:

#!/bin/bash
# Also verify the require statement in the benchmark file
cat ./benchmark/stream_download/benchmark.js | head -20



🏁 Script executed:

#!/bin/bash
# Check if urllib is used anywhere else in the project and what versions might be expected
rg 'urllib' --type ts --type js -i | grep -v node_modules | head -20



🏁 Script executed:

#!/bin/bash
# Check if there's any documentation or comments about urllib in the project
rg 'urllib' -B 2 -A 2 --type ts --type js --type json | grep -v node_modules | head -40



🏁 Script executed:

#!/bin/bash
# Check if urllib is listed as a transitive dependency in the main package.json
# by looking for packages that might depend on urllib
rg '"urllib"' node_modules 2>/dev/null | head -5 || echo "node_modules not available"



🏁 Script executed:

#!/bin/bash
# Check if there are any npm scripts or docs about running benchmarks
cat ./package.json | grep -A 20 '"scripts"'



Add urllib to benchmark/stream_download/package.json dependencies.

The urllib package is required by benchmark.js but not declared in the benchmark's package.json. This will cause the benchmark to fail with a module not found error at runtime.

Update benchmark/stream_download/package.json to include urllib as a dependency:

{
  "type": "commonjs",
  "dependencies": {
    "urllib": "^3.0.0"
  }
}
🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark.js around line 1, the file requires
"urllib" but benchmark/stream_download/package.json is missing that dependency;
update benchmark/stream_download/package.json to add "urllib" under dependencies
(e.g., "urllib": "^3.0.0") and ensure "type" remains "commonjs", then run npm
install (or update lockfile) so the module is available at runtime.

const fs = require('fs');
const crypto = require('crypto');
const path = require('path');
Contributor

⚠️ Potential issue | 🔴 Critical

Convert to ES module imports.

The code uses CommonJS require() statements, which violates the project's coding guidelines mandating ES modules throughout the codebase.

🔎 Proposed fix
-const urllib = require('urllib');
-const fs = require('fs');
-const crypto = require('crypto');
-const path = require('path');
+import urllib from 'urllib';
+import fs from 'fs';
+import crypto from 'crypto';
+import path from 'path';

Note: This requires changing package.json to "type": "module" (see related comment on package.json).

Based on learnings: Use ES modules (import/export) throughout the codebase - no CommonJS require()

📝 Committable suggestion


Suggested change
const urllib = require('urllib');
const fs = require('fs');
const crypto = require('crypto');
const path = require('path');
import urllib from 'urllib';
import fs from 'fs';
import crypto from 'crypto';
import path from 'path';
🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark.js around lines 1 to 4, the file uses
CommonJS require() calls; convert them to ES module imports (e.g., replace each
require with an import statement for urllib, fs, crypto, and path) and update
any downstream code that depended on CommonJS semantics if necessary; also
ensure package.json is set to "type": "module" so the ES imports are valid.

const { setTimeout } = require('timers/promises');

require('./gc.js');

const tmp = path.join(__dirname, 'tmp');
fs.mkdirSync(tmp, { recursive: true });

const BASE_URL = process.argv[2] || 'http://127.0.0.1';
let count = 0;
async function downloadAndUpload() {
  const tmpFilePath = path.join(tmp, `${crypto.randomUUID()}.txt`);
  const downloadResponse = await urllib.request(`${BASE_URL}/download/50mb_ones.txt`, {
    writeStream: fs.createWriteStream(tmpFilePath),
  });
  const uploadResponse = await urllib.request(`${BASE_URL}/upload/`, {
    method: 'POST',
    stream: fs.createReadStream(tmpFilePath),
  });
  await fs.promises.rm(tmpFilePath);
  count++;
  if (count % 100 === 0) {
    console.log(
      `Downloaded and uploaded ${count} times, downloadResponse: ${downloadResponse.status}, uploadResponse: ${uploadResponse.status}`,
    );
Copilot AI Dec 25, 2025

Missing error handling for the download and upload operations. If either request fails, the temporary file may not be cleaned up, leading to disk space issues over time. Consider wrapping the operations in a try-finally block to ensure cleanup happens even on failure.

Suggested change
const downloadResponse = await urllib.request(`${BASE_URL}/download/50mb_ones.txt`, {
writeStream: fs.createWriteStream(tmpFilePath),
});
const uploadResponse = await urllib.request(`${BASE_URL}/upload/`, {
method: 'POST',
stream: fs.createReadStream(tmpFilePath),
});
await fs.promises.rm(tmpFilePath);
count++;
if (count % 100 === 0) {
console.log(
`Downloaded and uploaded ${count} times, downloadResponse: ${downloadResponse.status}, uploadResponse: ${uploadResponse.status}`,
);
let downloadResponse;
let uploadResponse;
try {
downloadResponse = await urllib.request(`${BASE_URL}/download/50mb_ones.txt`, {
writeStream: fs.createWriteStream(tmpFilePath),
});
uploadResponse = await urllib.request(`${BASE_URL}/upload/`, {
method: 'POST',
stream: fs.createReadStream(tmpFilePath),
});
count++;
if (count % 100 === 0) {
console.log(
`Downloaded and uploaded ${count} times, downloadResponse: ${downloadResponse.status}, uploadResponse: ${uploadResponse.status}`,
);
}
} finally {
try {
await fs.promises.rm(tmpFilePath);
} catch {
// Ignore cleanup errors to avoid masking original failure
}

  }
}

let downloading = true;
(async () => {
  while (true) {
    if (downloading) {
      await Promise.all([downloadAndUpload(), downloadAndUpload(), downloadAndUpload()]);
    } else {
      await setTimeout(100);
      if (globalThis.gc) {
        globalThis.gc();
      }
    }
  }
})();
(async () => {
  while (true) {
    if (downloading) {
      await Promise.all([downloadAndUpload(), downloadAndUpload(), downloadAndUpload()]);
    } else {
      await setTimeout(100);
      if (globalThis.gc) {
        globalThis.gc();
      }
    }
  }
})();
Comment on lines +47 to +70
Copilot AI Dec 25, 2025

The infinite while loop with concurrent Promise.all calls could lead to uncontrolled resource consumption. If download/upload operations are slow or hang, this could accumulate many pending operations. Consider adding concurrency limits or waiting for previous operations to complete before starting new ones.

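One way to bound this, sketched here as a hypothetical variant rather than a change in the PR, is a fixed pool of sequential workers: each worker awaits its own transfer before starting the next, so at most `WORKERS` download/upload pairs are in flight (the `WORKERS` constant and the error logging are assumptions for illustration):

```js
// Hypothetical bounded-concurrency driver for benchmark.js.
// Reuses downloadAndUpload(), downloading, and the promise-based
// setTimeout already present in the file.
const WORKERS = 3; // assumed concurrency limit

async function worker() {
  while (true) {
    if (downloading) {
      // One transfer at a time per worker; a failure is logged, not fatal.
      await downloadAndUpload().catch((err) => console.error('transfer failed:', err.message));
    } else {
      await setTimeout(100);
    }
  }
}

for (let i = 0; i < WORKERS; i++) {
  worker();
}
```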

process.on('SIGUSR2', () => {
  downloading = !downloading;
});
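Note: `globalThis.gc` is only defined when Node.js is started with `--expose-gc` (e.g. `node --expose-gc benchmark.js`); with the plain `node benchmark.js` invocation shown in the README, the manual GC branch is a no-op.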
39 changes: 39 additions & 0 deletions benchmark/stream_download/benchmark_undici.js
@@ -0,0 +1,39 @@
const { request } = require('undici');
const fs = require('fs');
const crypto = require('crypto');
const path = require('path');
const { pipeline } = require('stream/promises');

async function downloadAndUpload() {
  const tmpFilePath = path.join(__dirname, `${crypto.randomUUID()}.txt`);

  // Download file
  const downloadResponse = await request('http://127.0.0.1/download/50mb_ones.txt');
  await pipeline(downloadResponse.body, fs.createWriteStream(tmpFilePath));

  // Upload file
  await request('http://127.0.0.1/upload/', {
    method: 'POST',
    body: fs.createReadStream(tmpFilePath),
  });

  await fs.promises.rm(tmpFilePath);
Comment on lines +10 to +20
Copilot AI Dec 25, 2025

Missing error handling for the download and upload operations. If either request fails, the temporary file may not be cleaned up, leading to disk space issues over time. Consider wrapping the operations in a try-finally block to ensure cleanup happens even on failure.

Suggested change
// Download file
const downloadResponse = await request('http://127.0.0.1/download/50mb_ones.txt');
await pipeline(downloadResponse.body, fs.createWriteStream(tmpFilePath));
// Upload file
await request('http://127.0.0.1/upload/', {
method: 'POST',
body: fs.createReadStream(tmpFilePath),
});
await fs.promises.rm(tmpFilePath);
try {
// Download file
const downloadResponse = await request('http://127.0.0.1/download/50mb_ones.txt');
await pipeline(downloadResponse.body, fs.createWriteStream(tmpFilePath));
// Upload file
await request('http://127.0.0.1/upload/', {
method: 'POST',
body: fs.createReadStream(tmpFilePath),
});
} finally {
try {
await fs.promises.rm(tmpFilePath, { force: true });
} catch {
// Ignore cleanup errors to avoid masking original failure
}
}

}

let downloading = true;
(async () => {
  while (true) {
    if (downloading) {
      await downloadAndUpload();
    } else {
      await setTimeout(100);
Contributor

⚠️ Potential issue | 🔴 Critical

Missing import for setTimeout.

Line 29 uses setTimeout(100) as a promise, but setTimeout is not imported from 'timers/promises'. This will cause a runtime error because the global setTimeout returns a timer ID, not a promise.

🔎 Proposed fix
 const fs = require('fs');
 const crypto = require('crypto');
 const path = require('path');
+const { setTimeout } = require('timers/promises');
 const { pipeline } = require('stream/promises');

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In benchmark/stream_download/benchmark_undici.js around line 29, the code calls
await setTimeout(100) but never imports the promise-based setTimeout, so the
global setTimeout returns a timer id and causes a runtime error; fix by adding
an import for the promise-based setTimeout from 'timers/promises' (e.g., import
{ setTimeout } from "timers/promises") at the top of the file and keep using
await setTimeout(100) as intended.

      if (globalThis.gc) {
        globalThis.gc();
      }
    }
  }
})();
Comment on lines +24 to +35
Copilot AI Dec 25, 2025

The infinite while loop with no error handling could cause the benchmark to run indefinitely even if operations are failing. If download/upload operations hang, this could accumulate many pending operations. Consider adding error handling and potentially a mechanism to detect and report failures.

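A minimal sketch of such failure handling, hypothetical and not part of the PR, could track consecutive failures and abort once a threshold is hit (`MAX_CONSECUTIVE_FAILURES` is an assumed constant):

```js
// Hypothetical failure-aware driver loop for benchmark_undici.js.
// Exits after too many consecutive failures instead of spinning
// forever against an unreachable or broken server.
const MAX_CONSECUTIVE_FAILURES = 10; // assumed threshold
let consecutiveFailures = 0;

(async () => {
  while (true) {
    try {
      await downloadAndUpload();
      consecutiveFailures = 0; // reset on success
    } catch (err) {
      consecutiveFailures++;
      console.error(`transfer failed (${consecutiveFailures} in a row):`, err.message);
      if (consecutiveFailures >= MAX_CONSECUTIVE_FAILURES) {
        console.error('Too many consecutive failures, exiting.');
        process.exit(1);
      }
    }
  }
})();
```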

process.on('SIGUSR2', () => {
  downloading = !downloading;
});
16 changes: 16 additions & 0 deletions benchmark/stream_download/curl-format.txt
@@ -0,0 +1,16 @@
time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_pretransfer: %{time_pretransfer}\n
time_redirect: %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
----------\n
size_download: %{size_download}\n
speed_download: %{speed_download}\n
size_request: %{size_request}\n
speed_upload: %{speed_upload}\n
content_type: %{content_type}\n
num_connects: %{num_connects}\n
num_redirects: %{num_redirects}
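This format file is consumed via curl's `-w`/`--write-out` flag; a typical invocation (assumed here, not shown in the PR) would be `curl -w "@curl-format.txt" -o /dev/null -s http://localhost:8080/download/50mb_ones.txt`.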
23 changes: 23 additions & 0 deletions benchmark/stream_download/docker-compose.nginx.yml
@@ -0,0 +1,23 @@
version: '3.8'

services:
  nginx-benchmark:
    image: nginx:alpine
    container_name: nginx-benchmark-server
    ports:
      - '8080:80'
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./nginx:/usr/share/nginx/html:ro
Comment on lines +9 to +11
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check the root directory configured in nginx.conf
rg -n 'root\s+' benchmark/stream_download/nginx.conf



🏁 Script executed:

#!/bin/bash
# View the context around the root directives
echo "=== Context around line 6 ==="
sed -n '1,15p' benchmark/stream_download/nginx.conf

echo ""
echo "=== Context around line 76 ==="
sed -n '70,82p' benchmark/stream_download/nginx.conf



Fix the server-level root directive in nginx.conf to match the mounted volume path.

The default root /var/www/html/; at the server level (line 6) does not match the mounted volume path /usr/share/nginx/html. The container will fail to serve content because /var/www/html is not mounted in docker-compose. The root /usr/share/nginx/html; directive at line 76 only applies to the error page location block and won't cover general requests.

Change line 6 to root /usr/share/nginx/html/; to align with the docker-compose volume mount.

🤖 Prompt for AI Agents
In benchmark/stream_download/docker-compose.nginx.yml around lines 9 to 11, the
nginx server-level root in the mounted nginx.conf points to /var/www/html which
doesn't match the docker-compose volume mount /usr/share/nginx/html; update the
server-level root directive in nginx.conf (line 6) to read root
/usr/share/nginx/html/ so general requests use the mounted content (leave the
existing error-page-specific root intact).

Copilot AI Dec 24, 2025

The volume mount path in docker-compose.nginx.yml uses '/usr/share/nginx/html' (line 11), which is the standard nginx path, but the nginx.conf file references '/var/www/html/' as the root directory (line 6 in nginx.conf) and the Dockerfile creates '/var/www/html' as the working directory. This path inconsistency will cause the nginx service to fail to serve files correctly.

    restart: unless-stopped

  # Optional: use openresty for Lua module support
  # openresty-benchmark:
  #   image: openresty/openresty:alpine
  #   container_name: openresty-benchmark-server
  #   ports:
  #     - "8080:80"
  #   volumes:
  #     - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
  #     - ./nginx:/usr/share/nginx/html:ro
  #   restart: unless-stopped
79 changes: 79 additions & 0 deletions benchmark/stream_download/gc.js
@@ -0,0 +1,79 @@
const { PerformanceObserver, constants } = require('node:perf_hooks');

const gcStats = {
  totalGCDuration: 0, // ms
  count: 0,
  byKind: {
    scavenge: 0, // minor GC
    markSweepCompact: 0, // major GC
    incremental: 0,
    weakc: 0,
    unknown: 0,
  },
};

// kind meaning: https://nodejs.org/api/perf_hooks.html#performancegc_kind
// 1: scavenge
// 2: mark-sweep-compact
// 4: incremental
// 8: weak callbacks
function kindToString(kind) {
  switch (kind) {
    case constants.NODE_PERFORMANCE_GC_MAJOR:
      return 'markSweepCompact';
    case constants.NODE_PERFORMANCE_GC_MINOR:
      return 'scavenge';
    case constants.NODE_PERFORMANCE_GC_INCREMENTAL:
      return 'incremental';
    case constants.NODE_PERFORMANCE_GC_WEAKCB:
      return 'weakc';
    default:
      return 'unknown';
  }
}

const obs = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  for (const entry of entries) {
    gcStats.totalGCDuration += entry.duration;
    gcStats.count += 1;

    const kindCode = entry.detail?.kind;
    const kind = kindToString(kindCode);
    if (!gcStats.byKind[kind]) gcStats.byKind[kind] = 0;
    gcStats.byKind[kind] += entry.duration;
  }
});

obs.observe({ entryTypes: ['gc'] });

// for other modules to use
function getGCStats() {
  return {
    totalGCDuration: gcStats.totalGCDuration,
    count: gcStats.count,
    avgDuration: gcStats.count ? gcStats.totalGCDuration / gcStats.count : 0,
    byKind: { ...gcStats.byKind },
  };
}

// only print GC stats if the GC environment variable is set
if (process.env.GC || true) {
Contributor

🛠️ Refactor suggestion | 🟠 Major

Remove constant condition to make environment check meaningful.

The condition if (process.env.GC || true) always evaluates to true, making the environment variable check ineffective. The logging will run unconditionally regardless of the GC environment variable setting.

🔎 Proposed fix
-if (process.env.GC || true) {
+if (process.env.GC) {
   setInterval(() => {
📝 Committable suggestion


Suggested change
if (process.env.GC || true) {
if (process.env.GC) {
setInterval(() => {
🧰 Tools
🪛 Biome (2.1.2)

[error] 61-61: Unexpected constant condition.

(lint/correctness/noConstantCondition)

🤖 Prompt for AI Agents
In benchmark/stream_download/gc.js around line 61, the condition `if
(process.env.GC || true)` always evaluates to true; remove the constant `||
true` so the branch actually depends on the environment variable (e.g., use `if
(process.env.GC)` or a stricter check like `if (process.env.GC === '1' ||
process.env.GC === 'true')`), and keep the existing logging/behavior inside the
block unchanged so it only runs when GC is enabled via env.

Copilot AI Dec 25, 2025

The condition process.env.GC || true will always evaluate to true, making the environment variable check ineffective. This means GC stats will always be printed regardless of the GC environment variable setting. Consider changing to just process.env.GC if you want to make it conditional, or remove the condition entirely if you always want logging.

Suggested change
if (process.env.GC || true) {
if (process.env.GC) {

  setInterval(() => {
    const stats = getGCStats();
    console.log('');
    console.log(
      '[GC]',
      'total(ms)=',
      stats.totalGCDuration.toFixed(2),
      'count=',
      stats.count,
      'avg(ms)=',
      stats.avgDuration.toFixed(2),
      'byKind=',
      stats.byKind,
    );
    // process memory usage
    console.log('process memory usage=', process.memoryUsage());
  }, 2000);
}
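Note that requiring gc.js installs the observer as a side effect, but although the comment says getGCStats is "for other modules to use", the file as shown never exports it. A minimal sketch of a consumer, assuming a module.exports line were added (that export is hypothetical, not in the PR):

```js
// In gc.js (hypothetical addition, not present in the PR as shown):
// module.exports = { getGCStats };

// In a consumer such as benchmark.js:
const { getGCStats } = require('./gc.js'); // the require also installs the GC observer

setInterval(() => {
  const { count, avgDuration } = getGCStats();
  console.log(`GC so far: ${count} pauses, avg ${avgDuration.toFixed(2)} ms`);
}, 5000);
```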
30 changes: 30 additions & 0 deletions benchmark/stream_download/generate_50mb_file.sh
@@ -0,0 +1,30 @@
#!/bin/bash

# Generate a 50MB txt file whose content is all '1' characters
# Output file: 50mb_ones.txt

OUTPUT_FILE="nginx/50mb_ones.txt"
TARGET_SIZE_MB=50
TARGET_SIZE_BYTES=$((TARGET_SIZE_MB * 1024 * 1024))

# Check whether the file already exists
if [ -f "$OUTPUT_FILE" ]; then
  echo "File $OUTPUT_FILE already exists, deleting..."
  rm -f "$OUTPUT_FILE"
fi

echo "Generating a $TARGET_SIZE_MB MB file filled with '1'..."

# Generate the file with dd: 1KB blocks, 50*1024 blocks in total
dd if=/dev/zero bs=1024 count=$((TARGET_SIZE_MB * 1024)) | tr '\0' '1' > "$OUTPUT_FILE"
Contributor

medium

The dd command is functionally correct, but can be made more readable by using 1M for block size and referencing the TARGET_SIZE_MB variable directly.

Suggested change
dd if=/dev/zero bs=1024 count=$((TARGET_SIZE_MB * 1024)) | tr '\0' '1' > "$OUTPUT_FILE"
dd if=/dev/zero bs=1M count=${TARGET_SIZE_MB} | tr '\0' '1' > "$OUTPUT_FILE"


# Verify the file size
ACTUAL_SIZE=$(stat -f%z "$OUTPUT_FILE" 2>/dev/null || stat -c%s "$OUTPUT_FILE" 2>/dev/null)
Copilot AI Dec 24, 2025

The stat command usage at line 22 attempts to use both BSD (-f%z) and GNU (-c%s) syntax with fallback, but the command structure will fail. When the BSD version fails, it will not silently fall back to the GNU version - instead, it will show an error. Consider using a more robust approach by checking the OS type first or using a conditional that properly handles the error without displaying it.

Suggested change
ACTUAL_SIZE=$(stat -f%z "$OUTPUT_FILE" 2>/dev/null || stat -c%s "$OUTPUT_FILE" 2>/dev/null)
ACTUAL_SIZE=$(wc -c < "$OUTPUT_FILE")

if [ "$ACTUAL_SIZE" -eq "$TARGET_SIZE_BYTES" ]; then
echo "成功生成文件: $OUTPUT_FILE"
echo "文件大小: $(ls -lh "$OUTPUT_FILE" | awk '{print $5}')"
else
echo "警告: 文件大小不匹配,期望: $TARGET_SIZE_BYTES 字节,实际: $ACTUAL_SIZE 字节"
fi

echo "文件路径: $(pwd)/$OUTPUT_FILE"