
Conversation

eakman-datadog (Contributor) commented Jan 30, 2026

ref #6546

Unit Tests

SQLite

SQLite unit tests:
➜  juicefs git:(3-batch-clone-sql) ✗ go test -v -run TestSQLiteClient -timeout 10m ./pkg/meta -count=1
=== RUN   TestSQLiteClient
2026/01/30 14:08:03.038447 juicefs[51872] <ERROR>: error: no such table: jfs_node
goroutine 62 [running]:
runtime/debug.Stack()
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/runtime/debug/stack.go:26 +0x64
github.com/juicedata/juicefs/pkg/meta.errno({0x103ae75a0, 0x140008bc240})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/utils.go:153 +0xe4
github.com/juicedata/juicefs/pkg/meta.(*dbMeta).doGetAttr(0x1400076bec0?, {0x103b46e10?, 0x140005ac180?}, 0x1?, 0x1400019aae0?)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/sql.go:1329 +0x5c
github.com/juicedata/juicefs/pkg/meta.(*baseMeta).GetAttr.func1({0x0?, 0x100648184?})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base.go:1279 +0x40
github.com/juicedata/juicefs/pkg/utils.WithTimeout.func1()
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/utils/utils.go:116 +0x38
created by github.com/juicedata/juicefs/pkg/utils.WithTimeout in goroutine 57
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/utils/utils.go:115 +0xdc [errno@utils.go:153]
2026/01/30 14:08:03.041564 juicefs[51872] <INFO>: Create session 1 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:08:03.055632 juicefs[51872] <INFO>: doFlushquot ino:13, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:15} [doFlushQuotas@sql.go:4169]
2026/01/30 14:08:03.055731 juicefs[51872] <INFO>: doFlushquot ino:13, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:-8192 newInodes:-20} [doFlushQuotas@sql.go:4169]
2026/01/30 14:08:03.157383 juicefs[51872] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:08:03.157616 juicefs[51872] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:08:03.266827 juicefs[51872] <WARNING>: File name is too long as a trash entry, truncating it: fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff -> 1-22-ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff [trashEntry@base.go:2854]
2026/01/30 14:08:03.272456 juicefs[51872] <INFO>: cleanup trash: deleted 10 files in 1.932625ms [CleanupTrashBefore@base.go:2927]
2026/01/30 14:08:03.937219 juicefs[51872] <INFO>: Quota for inode 13 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:08:05.147969 juicefs[51872] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:08:05.150647 juicefs[51872] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:08:05.422861 juicefs[51872] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:08:05.423097 juicefs[51872] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:08:06.753116 juicefs[51872] <ERROR>: error: UNIQUE constraint failed: jfs_chunk_ref.chunkid
goroutine 57 [running]:
runtime/debug.Stack()
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/runtime/debug/stack.go:26 +0x64
github.com/juicedata/juicefs/pkg/meta.errno({0x103ae75a0, 0x140014ca3f0})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/utils.go:153 +0xe4
github.com/juicedata/juicefs/pkg/meta.(*dbMeta).doWrite(0x1400076bec0, {0x103b46e10?, 0x140010ea240?}, 0x1037, 0x1?, 0x0?, {0x0?, 0x1?, 0x0?, 0x2760000?}, ...)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/sql.go:3082 +0xac
github.com/juicedata/juicefs/pkg/meta.(*baseMeta).Write(0x1400025f908, {0x103b46e10, 0x140010ea240}, 0x1037, 0x0, 0x0, {0x140012821f8?, 0x7d980?, 0x140?, 0x1b03b00?}, ...)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base.go:2002 +0x19c
github.com/juicedata/juicefs/pkg/meta.testCompaction(0x14000411180, {0x103b6a7d0, 0x1400076bec0}, 0x1)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base_test.go:1714 +0x1284
github.com/juicedata/juicefs/pkg/meta.testMeta(0x14000411180, {0x103b6a7d0, 0x1400076bec0})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base_test.go:138 +0x13c
github.com/juicedata/juicefs/pkg/meta.TestSQLiteClient(0x14000411180)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/sql_test.go:33 +0x180
testing.tRunner(0x14000411180, 0x103ad3810)
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/testing/testing.go:1997 +0x364 [errno@utils.go:153]
2026/01/30 14:08:06.755646 juicefs[51872] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:08:06.755743 juicefs[51872] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:08:06.757959 juicefs[51872] <INFO>: Create session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:08:06.768643 juicefs[51872] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:08:06.768829 juicefs[51872] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:08:06.837321 juicefs[51872] <INFO>: Update session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:08:09.838780 juicefs[51872] <INFO>: doFlushquot ino:4468, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:24576 newInodes:6} [doFlushQuotas@sql.go:4169]
2026/01/30 14:08:09.839076 juicefs[51872] <INFO>: doFlushquot ino:4469, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:08:09.839212 juicefs[51872] <INFO>: doFlushquot ino:4472, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:08:11.886536 juicefs[51872] <INFO>: doFlushquot ino:4468, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:-4096 newInodes:-1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:08:11.906812 juicefs[51872] <INFO>: Quota for inode 4469 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:08:11.906868 juicefs[51872] <INFO>: doFlushquot ino:4468, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:08:11.906901 juicefs[51872] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:08:11.915341 juicefs[51872] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:08:11.916111 juicefs[51872] <INFO>: Update session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
=== RUN   TestSQLiteClient/BasicQuotaOperations
2026/01/30 14:08:11.950601 juicefs[51872] <INFO>: Quota for user 0 is deleted [syncQuotaMaps@quota.go:340]
=== RUN   TestSQLiteClient/QuotaFileOperations
2026/01/30 14:08:11.982650 juicefs[51872] <INFO>: flush session 2: [FlushSession@base.go:905]
=== RUN   TestSQLiteClient/QuotaErrorCases
=== RUN   TestSQLiteClient/QuotaConcurrentOperations
=== RUN   TestSQLiteClient/QuotaMixedTypes
=== RUN   TestSQLiteClient/QuotaUsageStatistics
=== RUN   TestSQLiteClient/CheckQuotaFileOwner
2026/01/30 14:08:16.043146 juicefs[51872] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:08:16.043690 juicefs[51872] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
=== RUN   TestSQLiteClient/QuotaEdgeCases
    base_test.go:3786: Testing inodes-only quota limit...
2026/01/30 14:08:16.044229 juicefs[51872] <INFO>: Quota for group 1001 is deleted [syncQuotaMaps@quota.go:340]
    base_test.go:3802: Testing space-only quota limit...
=== RUN   TestSQLiteClient/HardlinkQuota
2026/01/30 14:08:16.047669 juicefs[51872] <INFO>: doFlushquot ino:4476, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:8192 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:08:16.150953 juicefs[51872] <INFO>: doFlushquot ino:4476, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:8192 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:08:16.253511 juicefs[51872] <INFO>: doFlushquot ino:4476, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:-8192 newInodes:-1} [doFlushQuotas@sql.go:4169]
=== RUN   TestSQLiteClient/BatchUnlinkWithUserGroupQuota
2026/01/30 14:08:16.357849 juicefs[51872] <INFO>: Quota for inode 4476 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:08:17.178108 juicefs[51872] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:08:17.180512 juicefs[51872] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:08:23.549736 juicefs[51872] <WARNING>: nlink of /check should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:08:23.549766 juicefs[51872] <WARNING>: Path /check (inode 4518) can be repaired, please re-run with '--path /check --repair' to fix it [Check@base.go:2444]
2026/01/30 14:08:23.549985 juicefs[51872] <WARNING>: nlink of /check should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:08:23.550102 juicefs[51872] <INFO>: Checked 1 nodes [Check@base.go:2507]
2026/01/30 14:08:23.550365 juicefs[51872] <WARNING>: nlink of /check/d1/d2 should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:08:23.550472 juicefs[51872] <INFO>: Checked 1 nodes [Check@base.go:2507]
2026/01/30 14:08:23.582841 juicefs[51872] <WARNING>: nlink of /check/d1 should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:08:23.582912 juicefs[51872] <WARNING>: nlink of /check/d1/d2/d3 should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:08:23.582966 juicefs[51872] <WARNING>: nlink of /check/d1/d2/d3/d4 should be 2, but got 0 [Check@base.go:2417]
2026/01/30 14:08:23.603396 juicefs[51872] <INFO>: Checked 4143 nodes [Check@base.go:2507]
2026/01/30 14:08:23.611698 juicefs[51872] <INFO>: Update session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:08:23.611760 juicefs[51872] <INFO>: Quota for inode 4480 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:08:23.611768 juicefs[51872] <INFO>: Quota for inode 4479 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:08:28.125730 juicefs[51872] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:08:28.127975 juicefs[51872] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:08:30.184215 juicefs[51872] <INFO>: Create read-only session OK with version: 1.4.0-dev+unknown [NewSession@base.go:700]
2026/01/30 14:08:30.184350 juicefs[51872] <WARNING>: Delete flock/plock with sid 2: read-only file system [doCleanStaleSession@sql.go:2930]
2026/01/30 14:08:30.184383 juicefs[51872] <INFO>: close session 2: failed to clean up sid 2 [CloseSession@base.go:894]
--- PASS: TestSQLiteClient (27.15s)
    --- PASS: TestSQLiteClient/BasicQuotaOperations (0.07s)
    --- PASS: TestSQLiteClient/QuotaFileOperations (2.00s)
    --- PASS: TestSQLiteClient/QuotaErrorCases (0.00s)
    --- PASS: TestSQLiteClient/QuotaConcurrentOperations (0.00s)
    --- PASS: TestSQLiteClient/QuotaMixedTypes (0.05s)
    --- PASS: TestSQLiteClient/QuotaUsageStatistics (2.00s)
    --- PASS: TestSQLiteClient/CheckQuotaFileOwner (0.00s)
    --- PASS: TestSQLiteClient/QuotaEdgeCases (0.00s)
    --- PASS: TestSQLiteClient/HardlinkQuota (0.31s)
    --- PASS: TestSQLiteClient/BatchUnlinkWithUserGroupQuota (0.82s)
PASS
ok      github.com/juicedata/juicefs/pkg/meta   27.921s

MySQL

MySQL Unit Test Output:
➜  juicefs git:(3-batch-clone-sql) ✗ go test -v -run TestMySQLClient -timeout 10m ./pkg/meta -count=1 
=== RUN   TestMySQLClient
2026/01/30 14:09:43.398276 juicefs[52488] <ERROR>: error: Error 1146 (42S02): Table 'dev.jfs_node' doesn't exist
goroutine 102 [running]:
runtime/debug.Stack()
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/runtime/debug/stack.go:26 +0x64
github.com/juicedata/juicefs/pkg/meta.errno({0x105da75e0, 0x14000854cc0})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/utils.go:153 +0xe4
github.com/juicedata/juicefs/pkg/meta.(*dbMeta).doGetAttr(0x140008eae40?, {0x105e06e10?, 0x140008eaf80?}, 0x1?, 0x1400019a480?)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/sql.go:1329 +0x5c
github.com/juicedata/juicefs/pkg/meta.(*baseMeta).GetAttr.func1({0x0?, 0x102908184?})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base.go:1279 +0x40
github.com/juicedata/juicefs/pkg/utils.WithTimeout.func1()
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/utils/utils.go:116 +0x38
created by github.com/juicedata/juicefs/pkg/utils.WithTimeout in goroutine 32
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/utils/utils.go:115 +0xdc [errno@utils.go:153]
2026/01/30 14:09:43.619552 juicefs[52488] <INFO>: Create session 1 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:09:43.803172 juicefs[52488] <INFO>: doFlushquot ino:13, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:15} [doFlushQuotas@sql.go:4169]
2026/01/30 14:09:43.805071 juicefs[52488] <INFO>: doFlushquot ino:13, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:-8192 newInodes:-20} [doFlushQuotas@sql.go:4169]
2026/01/30 14:09:43.934882 juicefs[52488] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:09:43.938894 juicefs[52488] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:09:44.174876 juicefs[52488] <WARNING>: File name is too long as a trash entry, truncating it: fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff -> 1-22-ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff [trashEntry@base.go:2854]
2026/01/30 14:09:44.294497 juicefs[52488] <INFO>: cleanup trash: deleted 10 files in 40.822625ms [CleanupTrashBefore@base.go:2927]
2026/01/30 14:09:59.527745 juicefs[52488] <INFO>: Quota for inode 13 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:10:30.892973 juicefs[52488] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:10:30.937005 juicefs[52488] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:10:31.837056 juicefs[52488] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:10:31.840853 juicefs[52488] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:11:14.695123 juicefs[52488] <WARNING>: Already tried 50 times, returning: Error 1062 (23000): Duplicate entry '0' for key 'jfs_chunk_ref.PRIMARY' [txn@sql.go:1076]
2026/01/30 14:11:14.695413 juicefs[52488] <ERROR>: error: Error 1062 (23000): Duplicate entry '0' for key 'jfs_chunk_ref.PRIMARY'
goroutine 32 [running]:
runtime/debug.Stack()
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/runtime/debug/stack.go:26 +0x64
github.com/juicedata/juicefs/pkg/meta.errno({0x105da75e0, 0x140034ffba8})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/utils.go:153 +0xe4
github.com/juicedata/juicefs/pkg/meta.(*dbMeta).doWrite(0x140008eae40, {0x105e06e10?, 0x140010a9d40?}, 0x1037, 0x1?, 0x0?, {0x0?, 0x1?, 0x0?, 0x4a20000?}, ...)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/sql.go:3082 +0xac
github.com/juicedata/juicefs/pkg/meta.(*baseMeta).Write(0x1400025f908, {0x105e06e10, 0x140010a9d40}, 0x1037, 0x0, 0x0, {0x140034c46d8?, 0x15db560?, 0x140?, 0x8ba900?}, ...)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base.go:2002 +0x19c
github.com/juicedata/juicefs/pkg/meta.testCompaction(0x14000603c00, {0x105e2a7d0, 0x140008eae40}, 0x1)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base_test.go:1714 +0x1284
github.com/juicedata/juicefs/pkg/meta.testMeta(0x14000603c00, {0x105e2a7d0, 0x140008eae40})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base_test.go:138 +0x13c
github.com/juicedata/juicefs/pkg/meta.TestMySQLClient(0x14000603c00)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/sql_test.go:41 +0x138
testing.tRunner(0x14000603c00, 0x105d937a8)
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/testing/testing.go:1997 +0x364 [errno@utils.go:153]
2026/01/30 14:11:14.754304 juicefs[52488] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:11:14.756676 juicefs[52488] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:11:14.824467 juicefs[52488] <INFO>: Create session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:11:14.875056 juicefs[52488] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:11:14.880952 juicefs[52488] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:11:15.012343 juicefs[52488] <INFO>: Update parent node affected rows = 0 should be 1 for inode = 4158 . [doMknod@sql.go:1773]
2026/01/30 14:11:15.406667 juicefs[52488] <WARNING>: no attribute for inode 4158 (1, d1) [doRmdir@sql.go:2039]
2026/01/30 14:11:15.409978 juicefs[52488] <WARNING>: no attribute for inode 4158 (1, d1) [doRmdir@sql.go:2039]
2026/01/30 14:11:15.413185 juicefs[52488] <WARNING>: no attribute for inode 4158 (1, d1) [doRmdir@sql.go:2039]
2026/01/30 14:11:15.500638 juicefs[52488] <INFO>: Update parent node affected rows = 0 should be 1 for inode = 4255 . [doUnlink@sql.go:1936]
2026/01/30 14:11:15.629158 juicefs[52488] <INFO>: Update session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:11:18.632206 juicefs[52488] <INFO>: doFlushquot ino:4468, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:24576 newInodes:6} [doFlushQuotas@sql.go:4169]
2026/01/30 14:11:18.636138 juicefs[52488] <INFO>: doFlushquot ino:4469, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:11:18.639717 juicefs[52488] <INFO>: doFlushquot ino:4472, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:11:20.726504 juicefs[52488] <INFO>: doFlushquot ino:4468, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:-4096 newInodes:-1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:11:20.764136 juicefs[52488] <INFO>: Quota for inode 4469 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:11:20.765976 juicefs[52488] <INFO>: doFlushquot ino:4468, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:11:20.767261 juicefs[52488] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:11:20.964429 juicefs[52488] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:11:20.979746 juicefs[52488] <INFO>: Update session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
=== RUN   TestMySQLClient/BasicQuotaOperations
2026/01/30 14:11:21.045505 juicefs[52488] <INFO>: Quota for user 0 is deleted [syncQuotaMaps@quota.go:340]
=== RUN   TestMySQLClient/QuotaFileOperations
2026/01/30 14:11:21.091948 juicefs[52488] <INFO>: flush session 2: [FlushSession@base.go:905]
=== RUN   TestMySQLClient/QuotaErrorCases
=== RUN   TestMySQLClient/QuotaConcurrentOperations
2026/01/30 14:11:23.118688 juicefs[52488] <WARNING>: Transaction succeeded after 3 tries (7.188166ms), inodes: [], method: doSetQuota, last error: Error 1213 (40001): Deadlock found when trying to get lock; try restarting transaction [txn@sql.go:1072]
2026/01/30 14:11:23.124909 juicefs[52488] <WARNING>: Transaction succeeded after 4 tries (13.512958ms), inodes: [], method: doSetQuota, last error: Error 1213 (40001): Deadlock found when trying to get lock; try restarting transaction [txn@sql.go:1072]
=== RUN   TestMySQLClient/QuotaMixedTypes
=== RUN   TestMySQLClient/QuotaUsageStatistics
=== RUN   TestMySQLClient/CheckQuotaFileOwner
2026/01/30 14:11:25.242528 juicefs[52488] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:11:25.245742 juicefs[52488] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
=== RUN   TestMySQLClient/QuotaEdgeCases
    base_test.go:3786: Testing inodes-only quota limit...
2026/01/30 14:11:25.248164 juicefs[52488] <INFO>: Quota for group 1001 is deleted [syncQuotaMaps@quota.go:340]
    base_test.go:3802: Testing space-only quota limit...
=== RUN   TestMySQLClient/HardlinkQuota
2026/01/30 14:11:25.265190 juicefs[52488] <INFO>: doFlushquot ino:4476, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:8192 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:11:25.378192 juicefs[52488] <INFO>: doFlushquot ino:4476, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:8192 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:11:25.492633 juicefs[52488] <INFO>: doFlushquot ino:4476, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:-8192 newInodes:-1} [doFlushQuotas@sql.go:4169]
=== RUN   TestMySQLClient/BatchUnlinkWithUserGroupQuota
2026/01/30 14:11:25.617506 juicefs[52488] <INFO>: Quota for inode 4476 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:11:26.588355 juicefs[52488] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:11:26.624494 juicefs[52488] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:11:33.213118 juicefs[52488] <WARNING>: nlink of /check should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:11:33.213134 juicefs[52488] <WARNING>: Path /check (inode 4518) can be repaired, please re-run with '--path /check --repair' to fix it [Check@base.go:2444]
2026/01/30 14:11:33.215139 juicefs[52488] <WARNING>: nlink of /check should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:11:33.216982 juicefs[52488] <INFO>: Checked 1 nodes [Check@base.go:2507]
2026/01/30 14:11:33.219386 juicefs[52488] <WARNING>: nlink of /check/d1/d2 should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:11:33.221737 juicefs[52488] <INFO>: Checked 1 nodes [Check@base.go:2507]
2026/01/30 14:11:33.233402 juicefs[52488] <WARNING>: nlink of / should be 15, but got 12 [Check@base.go:2417]
2026/01/30 14:11:33.455400 juicefs[52488] <WARNING>: nlink of /check/d1 should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:11:33.455708 juicefs[52488] <WARNING>: nlink of /check/d1/d2/d3/d4 should be 2, but got 0 [Check@base.go:2417]
2026/01/30 14:11:33.455754 juicefs[52488] <WARNING>: nlink of /check/d1/d2/d3 should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:11:33.459182 juicefs[52488] <INFO>: Checked 4143 nodes [Check@base.go:2507]
2026/01/30 14:11:33.484870 juicefs[52488] <INFO>: Update session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:11:33.485269 juicefs[52488] <INFO>: Quota for inode 4479 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:11:33.485276 juicefs[52488] <INFO>: Quota for inode 4480 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:11:38.114113 juicefs[52488] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:11:38.148035 juicefs[52488] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:11:40.441715 juicefs[52488] <INFO>: Create read-only session OK with version: 1.4.0-dev+unknown [NewSession@base.go:700]
2026/01/30 14:11:40.443126 juicefs[52488] <WARNING>: Delete flock/plock with sid 2: read-only file system [doCleanStaleSession@sql.go:2930]
2026/01/30 14:11:40.443519 juicefs[52488] <INFO>: close session 2: failed to clean up sid 2 [CloseSession@base.go:894]
--- PASS: TestMySQLClient (117.15s)
    --- PASS: TestMySQLClient/BasicQuotaOperations (0.10s)
    --- PASS: TestMySQLClient/QuotaFileOperations (2.03s)
    --- PASS: TestMySQLClient/QuotaErrorCases (0.01s)
    --- PASS: TestMySQLClient/QuotaConcurrentOperations (0.01s)
    --- PASS: TestMySQLClient/QuotaMixedTypes (0.04s)
    --- PASS: TestMySQLClient/QuotaUsageStatistics (2.04s)
    --- PASS: TestMySQLClient/CheckQuotaFileOwner (0.04s)
    --- PASS: TestMySQLClient/QuotaEdgeCases (0.00s)
    --- PASS: TestMySQLClient/HardlinkQuota (0.36s)
    --- PASS: TestMySQLClient/BatchUnlinkWithUserGroupQuota (0.91s)
PASS
ok      github.com/juicedata/juicefs/pkg/meta   117.831s

PostgreSQL

PostgreSQL Output:
➜  juicefs git:(3-batch-clone-sql) ✗ PGUSER=postgres PGPASSWORD=postgres go test -v -run TestPostgreSQLClient -timeout 10m ./pkg/meta

➜  juicefs git:(3-batch-clone-sql) ✗ PGUSER=postgres PGPASSWORD=postgres go test -v -run TestPostgreSQLClient -timeout 10m ./pkg/meta -count=1
=== RUN   TestPostgreSQLClient
2026/01/30 14:13:52.039534 juicefs[54986] <WARNING>: The latency to database is too high: 7.597208ms [newSQLMeta@sql.go:518]
2026/01/30 14:13:52.056251 juicefs[54986] <ERROR>: error: ERROR: relation "jfs_node" does not exist (SQLSTATE 42P01)
goroutine 49 [running]:
runtime/debug.Stack()
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/runtime/debug/stack.go:26 +0x64
github.com/juicedata/juicefs/pkg/meta.errno({0x107acf5c0, 0x14001218800})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/utils.go:153 +0xe4
github.com/juicedata/juicefs/pkg/meta.(*dbMeta).doGetAttr(0x1400078b400?, {0x107b2ee10?, 0x1400078b540?}, 0x1?, 0x1400121a480?)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/sql.go:1329 +0x5c
github.com/juicedata/juicefs/pkg/meta.(*baseMeta).GetAttr.func1({0x0?, 0x0?})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base.go:1279 +0x40
github.com/juicedata/juicefs/pkg/utils.WithTimeout.func1()
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/utils/utils.go:116 +0x38
created by github.com/juicedata/juicefs/pkg/utils.WithTimeout in goroutine 44
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/utils/utils.go:115 +0xdc [errno@utils.go:153]
2026/01/30 14:13:52.127401 juicefs[54986] <INFO>: Create session 1 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:13:52.197899 juicefs[54986] <INFO>: doFlushquot ino:13, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:15} [doFlushQuotas@sql.go:4169]
2026/01/30 14:13:52.198549 juicefs[54986] <INFO>: doFlushquot ino:13, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:-8192 newInodes:-20} [doFlushQuotas@sql.go:4169]
2026/01/30 14:13:52.317461 juicefs[54986] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:13:52.319679 juicefs[54986] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:13:52.514379 juicefs[54986] <WARNING>: File name is too long as a trash entry, truncating it: fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff -> 1-22-ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff [trashEntry@base.go:2854]
2026/01/30 14:13:52.557592 juicefs[54986] <INFO>: cleanup trash: deleted 10 files in 13.955292ms [CleanupTrashBefore@base.go:2927]
2026/01/30 14:13:58.242806 juicefs[54986] <INFO>: Quota for inode 13 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:14:11.200057 juicefs[54986] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:14:11.223290 juicefs[54986] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:14:11.751463 juicefs[54986] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:14:11.754321 juicefs[54986] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:14:54.256261 juicefs[54986] <WARNING>: Already tried 50 times, returning: ERROR: duplicate key value violates unique constraint "jfs_chunk_ref_pkey" (SQLSTATE 23505) [txn@sql.go:1076]
2026/01/30 14:14:54.256578 juicefs[54986] <ERROR>: error: ERROR: duplicate key value violates unique constraint "jfs_chunk_ref_pkey" (SQLSTATE 23505)
goroutine 44 [running]:
runtime/debug.Stack()
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/runtime/debug/stack.go:26 +0x64
github.com/juicedata/juicefs/pkg/meta.errno({0x107acf5c0, 0x14000cda600})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/utils.go:153 +0xe4
github.com/juicedata/juicefs/pkg/meta.(*dbMeta).doWrite(0x1400078b400, {0x107b2ee10?, 0x1400152f1c0?}, 0x1037, 0x1?, 0x0?, {0x0?, 0x1?, 0x0?, 0x6740000?}, ...)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/sql.go:3082 +0xac
github.com/juicedata/juicefs/pkg/meta.(*baseMeta).Write(0x14000942008, {0x107b2ee10, 0x1400152f1c0}, 0x1037, 0x0, 0x0, {0x14001aaa648?, 0x441b00?, 0x140?, 0x128c700?}, ...)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base.go:2002 +0x19c
github.com/juicedata/juicefs/pkg/meta.testCompaction(0x140002b9a40, {0x107b527d0, 0x1400078b400}, 0x1)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base_test.go:1714 +0x1284
github.com/juicedata/juicefs/pkg/meta.testMeta(0x140002b9a40, {0x107b527d0, 0x1400078b400})
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/base_test.go:138 +0x13c
github.com/juicedata/juicefs/pkg/meta.TestPostgreSQLClient(0x140002b9a40)
        /Users/eitan.akman/notes/juicefs-batch-clone/juicefs/pkg/meta/sql_test.go:52 +0x188
testing.tRunner(0x140002b9a40, 0x107abb7c8)
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
        /opt/homebrew/Cellar/go/1.25.3/libexec/src/testing/testing.go:1997 +0x364 [errno@utils.go:153]
2026/01/30 14:14:54.305502 juicefs[54986] <INFO>: flush session 1: [FlushSession@base.go:905]
2026/01/30 14:14:54.307398 juicefs[54986] <INFO>: close session 1: <nil> [CloseSession@base.go:894]
2026/01/30 14:14:54.350713 juicefs[54986] <INFO>: Create session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:14:54.370623 juicefs[54986] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:14:54.372921 juicefs[54986] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:14:54.710005 juicefs[54986] <WARNING>: no attribute for inode 4163 (1, d1) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.710010 juicefs[54986] <WARNING>: no attribute for inode 4163 (1, d1) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.714359 juicefs[54986] <WARNING>: no attribute for inode 4163 (1, d1) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.805720 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.805947 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806008 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806097 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806106 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806135 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806137 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806144 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806223 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806235 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806291 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.806531 juicefs[54986] <WARNING>: no attribute for inode 4255 (1, d2) [doRmdir@sql.go:2039]
2026/01/30 14:14:54.852118 juicefs[54986] <INFO>: Update session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:14:57.857164 juicefs[54986] <INFO>: doFlushquot ino:4468, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:24576 newInodes:6} [doFlushQuotas@sql.go:4169]
2026/01/30 14:14:57.860341 juicefs[54986] <INFO>: doFlushquot ino:4469, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:14:57.861615 juicefs[54986] <INFO>: doFlushquot ino:4472, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:14:59.931769 juicefs[54986] <INFO>: doFlushquot ino:4468, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:-4096 newInodes:-1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:14:59.958637 juicefs[54986] <INFO>: Quota for inode 4469 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:14:59.958832 juicefs[54986] <INFO>: doFlushquot ino:4468, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:4096 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:14:59.959491 juicefs[54986] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:15:00.039259 juicefs[54986] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:15:00.047099 juicefs[54986] <INFO>: Update session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
=== RUN   TestPostgreSQLClient/BasicQuotaOperations
2026/01/30 14:15:00.089585 juicefs[54986] <INFO>: Quota for user 0 is deleted [syncQuotaMaps@quota.go:340]
=== RUN   TestPostgreSQLClient/QuotaFileOperations
2026/01/30 14:15:00.124838 juicefs[54986] <INFO>: flush session 2: [FlushSession@base.go:905]
=== RUN   TestPostgreSQLClient/QuotaErrorCases
=== RUN   TestPostgreSQLClient/QuotaConcurrentOperations
=== RUN   TestPostgreSQLClient/QuotaMixedTypes
=== RUN   TestPostgreSQLClient/QuotaUsageStatistics
=== RUN   TestPostgreSQLClient/CheckQuotaFileOwner
2026/01/30 14:15:04.268379 juicefs[54986] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:15:04.270761 juicefs[54986] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
=== RUN   TestPostgreSQLClient/QuotaEdgeCases
    base_test.go:3786: Testing inodes-only quota limit...
2026/01/30 14:15:04.272431 juicefs[54986] <INFO>: Quota for group 1001 is deleted [syncQuotaMaps@quota.go:340]
    base_test.go:3802: Testing space-only quota limit...
=== RUN   TestPostgreSQLClient/HardlinkQuota
2026/01/30 14:15:04.283877 juicefs[54986] <INFO>: doFlushquot ino:4476, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:8192 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:15:04.394179 juicefs[54986] <INFO>: doFlushquot ino:4476, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:8192 newInodes:1} [doFlushQuotas@sql.go:4169]
2026/01/30 14:15:04.500086 juicefs[54986] <INFO>: doFlushquot ino:4476, &{MaxSpace:0 MaxInodes:0 UsedSpace:0 UsedInodes:0 newSpace:-8192 newInodes:-1} [doFlushQuotas@sql.go:4169]
=== RUN   TestPostgreSQLClient/BatchUnlinkWithUserGroupQuota
2026/01/30 14:15:04.609465 juicefs[54986] <INFO>: Quota for inode 4476 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:15:05.533178 juicefs[54986] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:15:05.556847 juicefs[54986] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:15:12.051082 juicefs[54986] <WARNING>: nlink of /check should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:15:12.051195 juicefs[54986] <WARNING>: Path /check (inode 4518) can be repaired, please re-run with '--path /check --repair' to fix it [Check@base.go:2444]
2026/01/30 14:15:12.052273 juicefs[54986] <WARNING>: nlink of /check should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:15:12.053355 juicefs[54986] <INFO>: Checked 1 nodes [Check@base.go:2507]
2026/01/30 14:15:12.054568 juicefs[54986] <WARNING>: nlink of /check/d1/d2 should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:15:12.055515 juicefs[54986] <INFO>: Checked 1 nodes [Check@base.go:2507]
2026/01/30 14:15:12.063340 juicefs[54986] <WARNING>: nlink of / should be 15, but got 0 [Check@base.go:2417]
2026/01/30 14:15:12.184553 juicefs[54986] <WARNING>: nlink of /check/d1 should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:15:12.184561 juicefs[54986] <WARNING>: nlink of /check/d1/d2/d3/d4 should be 2, but got 0 [Check@base.go:2417]
2026/01/30 14:15:12.184598 juicefs[54986] <WARNING>: nlink of /check/d1/d2/d3 should be 3, but got 0 [Check@base.go:2417]
2026/01/30 14:15:12.186670 juicefs[54986] <INFO>: Checked 4143 nodes [Check@base.go:2507]
2026/01/30 14:15:12.215366 juicefs[54986] <INFO>: Update session 2 OK with version: 1.4.0-dev+unknown [NewSession@base.go:719]
2026/01/30 14:15:12.216016 juicefs[54986] <INFO>: Quota for inode 4479 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:15:12.216025 juicefs[54986] <INFO>: Quota for inode 4480 is deleted [syncQuotaMaps@quota.go:340]
2026/01/30 14:15:16.821733 juicefs[54986] <INFO>: flush session 2: [FlushSession@base.go:905]
2026/01/30 14:15:16.831205 juicefs[54986] <INFO>: close session 2: <nil> [CloseSession@base.go:894]
2026/01/30 14:15:19.029493 juicefs[54986] <INFO>: Create read-only session OK with version: 1.4.0-dev+unknown [NewSession@base.go:700]
2026/01/30 14:15:19.030352 juicefs[54986] <WARNING>: Delete flock/plock with sid 2: read-only file system [doCleanStaleSession@sql.go:2930]
2026/01/30 14:15:19.030638 juicefs[54986] <INFO>: close session 2: failed to clean up sid 2 [CloseSession@base.go:894]
--- PASS: TestPostgreSQLClient (87.00s)
    --- PASS: TestPostgreSQLClient/BasicQuotaOperations (0.07s)
    --- PASS: TestPostgreSQLClient/QuotaFileOperations (2.02s)
    --- PASS: TestPostgreSQLClient/QuotaErrorCases (0.01s)
    --- PASS: TestPostgreSQLClient/QuotaConcurrentOperations (0.01s)
    --- PASS: TestPostgreSQLClient/QuotaMixedTypes (0.06s)
    --- PASS: TestPostgreSQLClient/QuotaUsageStatistics (2.03s)
    --- PASS: TestPostgreSQLClient/CheckQuotaFileOwner (0.03s)
    --- PASS: TestPostgreSQLClient/QuotaEdgeCases (0.00s)
    --- PASS: TestPostgreSQLClient/HardlinkQuota (0.33s)
    --- PASS: TestPostgreSQLClient/BatchUnlinkWithUserGroupQuota (0.89s)
=== RUN   TestPostgreSQLClientWithSearchPath
--- PASS: TestPostgreSQLClientWithSearchPath (0.00s)
PASS
ok      github.com/juicedata/juicefs/pkg/meta   87.677s

codecov bot commented Jan 30, 2026

Codecov Report

❌ Patch coverage is 69.82968% with 124 lines in your changes missing coverage. Please review.
✅ Project coverage is 42.20%. Comparing base (ae22a98) to head (d5b1f69).
⚠️ Report is 44 commits behind head on main.

Files with missing lines Patch % Lines
pkg/meta/sql.go 74.50% 52 Missing and 25 partials ⚠️
pkg/meta/base.go 65.43% 21 Missing and 7 partials ⚠️
cmd/clone.go 38.46% 8 Missing ⚠️
pkg/vfs/internal.go 0.00% 7 Missing ⚠️
pkg/meta/tkv.go 50.00% 1 Missing and 1 partial ⚠️
pkg/fs/fs.go 0.00% 1 Missing ⚠️
pkg/meta/redis.go 66.66% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6656      +/-   ##
==========================================
- Coverage   51.45%   42.20%   -9.25%     
==========================================
  Files         169      168       -1     
  Lines       52242    53297    +1055     
==========================================
- Hits        26882    22496    -4386     
- Misses      22336    28004    +5668     
+ Partials     3024     2797     -227     


Copilot AI left a comment
Pull request overview

This PR implements a SQL batch clone optimization for JuiceFS metadata operations. The primary goal is to improve the performance of cloning file operations by batching multiple non-directory entries into a single database transaction, reducing the number of database round-trips.

Changes:

  • Introduces doBatchClone method for SQL backend that processes multiple file/symlink entries in one transaction
  • Modifies cloneEntry in base.go to use batch cloning for non-directory entries with fallback to sequential cloning
  • Adds stub implementations returning ENOTSUP for Redis and TKV backends to maintain compatibility
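
As a rough illustration of the batch-then-fallback flow described in these bullets, here is a minimal sketch (assumed to live in pkg/meta where Context, Meta, Ino, and Entry are defined; the BatchClone signature follows the call shown later in this review, cloneOne is a stand-in for the existing per-entry clone path, and imports are elided):

```go
// Sketch only: try the batched path first; backends without doBatchClone
// (Redis, TKV) return ENOTSUP, so each entry is cloned one by one instead.
func cloneNonDirs(ctx Context, m Meta, srcIno, dstIno Ino, files []*Entry,
	cmode uint8, cumask uint16, count *uint64,
	cloneOne func(*Entry) syscall.Errno) syscall.Errno {

	if len(files) == 0 {
		return 0
	}
	eno := m.BatchClone(ctx, srcIno, dstIno, files, cmode, cumask, count)
	if eno == syscall.ENOTSUP {
		// Fallback: per-entry clone, as before this PR.
		for _, e := range files {
			if eno := cloneOne(e); eno != 0 {
				return eno
			}
		}
		return 0
	}
	return eno
}
```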

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 7 comments.

File Description
pkg/meta/base.go Adds BatchClone public API and refactors cloneEntry to separate directories from files, calling BatchClone for files
pkg/meta/sql.go Implements doBatchClone with optimized batch INSERT/UPDATE operations and aggregated chunk_ref updates
pkg/meta/redis.go Adds stub doBatchClone returning ENOTSUP for Redis backend
pkg/meta/tkv.go Adds stub doBatchClone returning ENOTSUP for TKV backend


pkg/meta/base.go Outdated
Comment on lines 3285 to 3330
wg.Wait()
if len(errCh) > 0 {
eno = <-errCh
goto END
}

// Batch clone all non-directory entries from this batch
if len(nonDirEntries) > 0 {
if eno = m.BatchClone(ctx, srcIno, ino, nonDirEntries, cmode, cumask, count); eno == syscall.ENOTSUP {
// Fallback: clone each file concurrently (same pattern as directories)
for _, e := range nonDirEntries {
select {
case concurrent <- struct{}{}:
wg.Add(1)
go func(entry *Entry) {
defer wg.Done()
childEno := cloneChild(entry)
if childEno != 0 {
errCh <- childEno
}
<-concurrent
}(e)
default:
// Synchronous fallback when channel is full
if childEno := cloneChild(e); childEno != 0 {
eno = childEno
goto END
}
}
}
// Reset error after spawning goroutines - errors will be reported via errCh
eno = 0
} else if eno != 0 {
goto END
}
}

offset += len(batchEntries)
if ctx.Canceled() {
eno = syscall.EINTR
break
goto END
}
}

END:
wg.Wait()
Copilot AI Feb 2, 2026
Potential WaitGroup race condition in the fallback path. After spawning goroutines for nonDirEntries (lines 3295-3314), wg.Wait() is called again at line 3330. However, if the BatchClone succeeds (eno == 0), no goroutines were spawned for nonDirEntries, so the second wg.Wait() would only wait for the directory goroutines which were already waited for at line 3285. While this is not necessarily incorrect, it creates an unnecessary wait. More critically, if errors occur between lines 3293-3319, the code jumps to END without waiting for the spawned fallback goroutines, which could lead to goroutines writing to errCh after the function returns.

zhijian-pro marked this pull request as ready for review February 3, 2026 09:46

zhijian-pro (Contributor) commented:
Could you provide the performance test report and the performance comparison before and after optimization?

eakman-datadog (Author) commented:
> Could you provide the performance test report and the performance comparison before and after optimization?

Yes, will do so today

eakman-datadog (Author) commented Feb 3, 2026

@zhijian-pro @jiefenghuang @vyalamar

This PR is still a tiny bit rough, but I think it's enough to get started conversing about the solution. In terms of performance, here's what I observed with the directory-level batching approach.

Baseline

The baseline in my environment. This is with a remote postgres metadata engine.

root@mount-pod-tester-54c59f7f79-fckv6:/# time juicefs clone /data/clone-testing/kubernetes/ /data/clone-testing/kubernetes1
Cloning entries: 34141/34141 [==============================================================]  234.1/s used: 2m25.833438123s

real    2m25.887s
user    0m0.335s
sys     0m0.258s
root@mount-pod-tester-54c59f7f79-fckv6:/# 

Optimized version

With the directory-level batching, I see a significant improvement, though not quite at the level that I think is theoretically possible.

root@mount-pod-tester-54c59f7f79-fckv6:/# time ./juicefs-exp clone /data/clone-testing/kubernetes/ /data/clone-testing/kubernetes2
Cloning entries: 34141/34141 [==============================================================]  399.6/s used: 1m25.431745249s

real    1m25.488s
user    0m0.220s
sys     0m0.155s

Optimized version + increased concurrency

To improve this somewhat, I added a concurrency flag. Presently, this is fixed at 4. With the flag it can be customized as needed. This shows a noticeable improvement still.

root@mount-pod-tester-54c59f7f79-fckv6:/# time ./juicefs-exp clone --concurrency 32 /data/clone-testing/kubernetes/ /data/clone-testing/kubernetes3
Cloning entries: 34141/34141 [==============================================================]  1427.0/s used: 23.924722174s

real    0m23.985s
user    0m0.103s
sys     0m0.068s
root@mount-pod-tester-54c59f7f79-fckv6:/# 

Thoughts?

If we wanted to go the cross-dir batching route, I think we could achieve better performance. But let me know your opinions or if you see a way to make this current approach more performant.

Copilot AI left a comment

Pull request overview

Copilot reviewed 9 out of 9 changed files in this pull request and generated 7 comments.



pkg/meta/base.go Outdated
Comment on lines 3287 to 3291
// Check for errors from concurrent subdir processing (non-blocking)
select {
case e := <-errCh:
eno = e
goto END
Copilot AI Feb 4, 2026
Potential deadlock: The error channel errCh (created earlier in the function at line 3224 with capacity cap(concurrent)) could cause deadlocks. If multiple concurrent goroutines encounter errors and try to send to errCh while the main goroutine has already jumped to END with a different error (e.g., from line 3270, 3281, 3314, or 3322), the goroutines could block on sending to the full channel. The wg.Wait() at line 3334 would then wait indefinitely for blocked goroutines. Consider either: (1) making errCh buffered with a larger capacity, or (2) using context cancellation to signal all goroutines to stop when an error occurs, or (3) having a separate goroutine drain errCh.

zhijian-pro (Contributor), Feb 4, 2026:
@eakman-datadog Perhaps using errorgroup is simpler and clearer?

Contributor:
@copilot Is using errorgroup a good practice here?

Contributor Author:
I refactored to use errorgroup
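
For readers following along, a minimal sketch of the errgroup pattern under discussion (assumes golang.org/x/sync/errgroup and syscall are imported; cloneChild and the function name are stand-ins, not the actual base.go code):

```go
// Sketch only: errgroup in place of the WaitGroup + error channel. g.Wait()
// blocks until every spawned goroutine has returned and yields the first
// error, so nothing can write to a channel after the caller moves on.
// Concurrency limiting is handled separately (see SetLimit/TryGo below).
func cloneChildrenConcurrently(children []*Entry,
	cloneChild func(*Entry) syscall.Errno) syscall.Errno {

	var g errgroup.Group
	for _, e := range children {
		e := e // capture loop variable for the closure
		g.Go(func() error {
			if eno := cloneChild(e); eno != 0 {
				return eno // syscall.Errno satisfies the error interface
			}
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		return err.(syscall.Errno)
	}
	return 0
}
```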

pkg/meta/sql.go Outdated
nodesIns = append(nodesIns, &info.dstNode)
edgesIns = append(edgesIns, &edge{
Parent: dstParent,
Name: []byte(info.entry.Name),
Copilot AI Feb 4, 2026
Unnecessary type conversion: info.entry.Name is already []byte (as defined in Entry struct at pkg/meta/interface.go:308), so the conversion []byte(info.entry.Name) creates an unnecessary copy of the byte slice. This is a minor performance issue but not a bug. Consider using info.entry.Name directly to avoid the allocation.

Suggested change
Name: []byte(info.entry.Name),
Name: info.entry.Name,

Contributor Author:
Fixed

pkg/meta/sql.go Outdated
chunksIns := make([]interface{}, 0)
symlinksIns := make([]interface{}, 0)

// CRITICAL: Aggregate chunk_ref updates (addresses TODO at line 5050)
Copilot AI Feb 4, 2026
Outdated comment reference: The comment mentions "addresses TODO at line 5050" but this refers to the old doCloneEntry function's TODO comment, not a line in the current function. While the implementation does address that TODO by aggregating chunk_ref updates, the comment should either reference the doCloneEntry function explicitly (e.g., "addresses TODO in doCloneEntry at line 5050") or be reworded to describe what's being done without the line reference, for better maintainability.

Suggested change
// CRITICAL: Aggregate chunk_ref updates (addresses TODO at line 5050)
// CRITICAL: Aggregate chunk_ref updates per chunk to minimize database operations
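
A rough sketch of the aggregation idea behind that comment: one UPDATE per distinct chunk id instead of one per cloned slice. The table and column names follow the jfs_chunk_ref references in the test logs above, but the helper and its signature are assumptions, not the PR's actual code (assumes xorm.io/xorm is imported):

```go
// Sketch only: collect reference-count deltas per chunk id, then issue a
// single UPDATE per chunk inside the batch-clone transaction.
func addChunkRefs(s *xorm.Session, sliceIDs []uint64) error {
	refDelta := make(map[uint64]int64) // chunk id -> new references added by this batch
	for _, id := range sliceIDs {
		if id > 0 { // skip id 0, which marks a hole and has no chunk_ref row
			refDelta[id]++
		}
	}
	for id, delta := range refDelta {
		if _, err := s.Exec("update jfs_chunk_ref set refs=refs+? where chunkid=?", delta, id); err != nil {
			return err
		}
	}
	return nil
}
```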

Contributor:
Update it

Contributor Author:
Fixing...

Contributor Author:
Done

cmd/clone.go Outdated
Usage: "preserve the uid, gid, and mode of the file. (This is forced on Windows)",
},
&cli.IntFlag{
Name: "concurrency",
Contributor:
threads

Contributor Author:
Fixing...

Contributor Author:
Done

zhijian-pro (Contributor) commented:
Adding the --threads parameter to control concurrency is a good suggestion.

> If we wanted to go the cross-dir batching route, I think we could achieve better performance. But let me know your opinions or if you see a way to make this current approach more performant.

Theoretically, there should be better performance, especially in scenarios with high network latency to the metadata engine, large directories, large files, and many extended attributes.

doDeleteSlice(id uint64, size uint32) error

doCloneEntry(ctx Context, srcIno Ino, parent Ino, name string, ino Ino, attr *Attr, cmode uint8, cumask uint16, top bool) syscall.Errno
doBatchClone(ctx Context, srcParent Ino, dstParent Ino, entries []*Entry, cmode uint8, cumask uint16, length *int64, space *int64, inodes *int64, userGroupQuotas *[]userGroupQuotaDelta) syscall.Errno
Contributor:
mark: Combine the statistics-related parameters into a single structure in new PR. Do the same for doBatchUnlink.
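
A hypothetical sketch of what that suggested refactor could look like (names are illustrative only and not part of this PR; userGroupQuotaDelta is the type referenced in the signature above):

```go
// Sketch only: bundle the statistics output parameters into one struct so
// doBatchClone (and doBatchUnlink) take a single pointer instead of four.
type batchCloneStats struct {
	Length          int64                 // total length of cloned files
	Space           int64                 // space usage delta
	Inodes          int64                 // inode count delta
	UserGroupQuotas []userGroupQuotaDelta // per-user/group quota deltas
}

// doBatchClone(ctx Context, srcParent, dstParent Ino, entries []*Entry,
//              cmode uint8, cumask uint16, stats *batchCloneStats) syscall.Errno
```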

pkg/meta/base.go Outdated
if eno = batchEno; batchEno != 0 {
break
if batchEno != 0 {
return batchEno
Contributor:
wait for child

Contributor Author:
Fixed

pkg/meta/base.go Outdated
// Batch clone files immediately (don't wait for subdirs to finish)
if len(nonDirEntries) > 0 {
if eno = m.BatchClone(ctx, srcIno, ino, nonDirEntries, cmode, cumask, count); eno == syscall.ENOTSUP {
// Fallback: clone each file concurrently (same pattern as directories)
Contributor:
mark: move duplicate code into a func in new pr

pkg/meta/sql.go Outdated
})

// Process in batches (500 per query to avoid size limits)
batchSize := 500
Contributor:
txnBatchNum

Contributor Author:
Done

pkg/meta/base.go Outdated
// Synchronous fallback when channel is full
if childEno := cloneChild(e); childEno != 0 {
eno = childEno
goto END
Contributor:
mark: simplify code without goto in new pr

Contributor Author:
Did this indirectly

Copilot AI left a comment
Pull request overview

Copilot reviewed 9 out of 9 changed files in this pull request and generated 4 comments.



cmd/clone.go Outdated
},
&cli.IntFlag{
Name: "threads",
Value: 4,
Copilot AI Feb 6, 2026
Default concurrency value should be defined as a constant. The value 4 is hardcoded in multiple places (cmd/clone.go:58, pkg/vfs/internal.go:338, pkg/fs/fs.go:1054) and used as the default concurrency for clone operations. Consider defining this as a constant (e.g., CLONE_DEFAULT_CONCURRENCY = 4) in pkg/meta/utils.go alongside other clone-related constants like CLONE_MODE_PRESERVE_ATTR, to improve maintainability and ensure consistency across the codebase.

Suggested change
Value: 4,
Value: meta.CLONE_DEFAULT_CONCURRENCY,

Contributor Author:
Done

defer handler.Close()

var wg sync.WaitGroup
var g errgroup.Group
Contributor:
You can use the SetLimit control of the errgroup to manage concurrency.

Contributor Author:
Done. Good call!

Contributor Author:
On second thought, I think it's better to keep the concurrent channel. The reason is that, with the current approach, each call to cloneEntry waits for all of its child entries to complete before returning, i.e.:

	// Wait for all goroutines and get first error
	if err := g.Wait(); err != nil && eno == 0 {
		eno = err.(syscall.Errno)
	}

If we were to pass in a global errgroup shared by all goroutines, the above section would take on a slightly different meaning: it would wait until all goroutines in the errgroup have completed, not just the children (they could be siblings, cousins, etc.).

That might be okay, but I think it makes more sense to keep it the way it is.

return nil
})
default:
// Synchronous fallback when concurrency limit reached
zhijian-pro (Contributor), Feb 6, 2026:
The errgroup also has a TryGo function, which you can also take a look at.
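
For context, errgroup's TryGo only starts the function if the SetLimit budget allows and returns false otherwise, which maps onto the "synchronous fallback when concurrency limit reached" branch shown in the diff above. A rough fragment (g, e, and cloneChild are stand-ins for the surrounding code, not the PR's actual names):

```go
// Sketch only: run the child in the group if a slot is free, otherwise
// clone it synchronously in the current goroutine.
if ok := g.TryGo(func() error {
	if eno := cloneChild(e); eno != 0 {
		return eno
	}
	return nil
}); !ok {
	if eno := cloneChild(e); eno != 0 {
		return eno
	}
}
```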

Contributor Author:
Done

Contributor Author:
Removed. See #6656 (comment)

eakman-datadog (Author) commented:
@zhijian-pro @jiefenghuang don't merge. I seem to have introduced a bug somewhere. Need some time to debug.

eakman-datadog (Author) commented Feb 10, 2026

@zhijian-pro @jiefenghuang This should be ready for a re-review now. FYI, while I was cleaning this up and doing some testing, I noticed a bug with the batch clone implementation and SQLite. With SQLite, at the beginning of the txn function we acquire a global write lock.

In doBatchClone, I was beginning a transaction via txn and then, within the transaction, calling dstIno, err := m.nextInode() for each directory entry. In some cases, when the in-memory inode pool is exhausted, nextInode will actually call txn itself, which in turn will attempt to acquire the write lock. This results in a deadlock.

To resolve this, it's been changed to allocate the inodes prior to beginning the main transaction (see bb3a338).
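
A simplified sketch of that ordering (method names follow the description above; the transaction body is elided and the surrounding signatures are assumptions, not the exact commit):

```go
// Sketch only: allocate all destination inodes before opening the batch
// transaction. nextInode() may run its own transaction when the in-memory
// inode pool is exhausted, and nesting that inside txn() would try to take
// SQLite's global write lock a second time and deadlock.
func (m *dbMeta) batchCloneSketch(entries []*Entry) syscall.Errno {
	dstInos := make([]Ino, 0, len(entries))
	for range entries {
		ino, err := m.nextInode() // outside the main transaction on purpose
		if err != nil {
			return errno(err)
		}
		dstInos = append(dstInos, ino)
	}
	return errno(m.txn(func(s *xorm.Session) error {
		// ... insert the cloned nodes/edges/chunks using the pre-allocated dstInos ...
		_ = dstInos
		return nil
	}))
}
```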

Incorporate newly added doBatchClone engine function and rework clone looping mechanics.
1. consolidate cloneInfo loops
2. remove directory related code for doBatchClone as we only deal with non-directories
3. undo changes to makefile
4. rename concurrency CLI argument to threads
5. refactor to use errorgroup instead of WaitGroup to eliminate deadlock potential and simplify code
6. removes unneeded memory management line
7. properly track subdirectory info
8. remove unneeded type casting
9. use getTxnBatchNum
10. Fix outdated comment
This reverts commit b1e36a8.