Compare commits


No commits in common. "master" and "v5.0.0" have entirely different histories.

17 changed files with 327 additions and 720 deletions

View File

@ -18,9 +18,13 @@ jobs:
- name: chkfmt - name: chkfmt
run: scripts/chkfmt run: scripts/chkfmt
- name: prep-test
run: scripts/run_test_prep
- name: tests - name: tests
run: | run: |
scripts/tests scripts/tests
scripts/run_tests
- name: xbuild - name: xbuild
run: scripts/xbuild run: scripts/xbuild

View File

@ -17,9 +17,13 @@ jobs:
- name: chkfmt - name: chkfmt
run: scripts/chkfmt run: scripts/chkfmt
- name: prep-test
run: scripts/run_test_prep
- name: tests - name: tests
run: | run: |
scripts/tests scripts/tests
scripts/run_tests
- name: xbuild - name: xbuild
run: version=${GITHUB_REF#$"refs/tags/v"} scripts/xbuild run: version=${GITHUB_REF#$"refs/tags/v"} scripts/xbuild

View File

@ -3,16 +3,15 @@
chkbit is a tool that ensures the safety of your files by checking if their *data integrity remains intact over time*, especially during transfers and backups. It helps detect issues like disk damage, filesystem errors, and malware interference. chkbit is a tool that ensures the safety of your files by checking if their *data integrity remains intact over time*, especially during transfers and backups. It helps detect issues like disk damage, filesystem errors, and malware interference.
![gif of chkbit](https://raw.githubusercontent.com/wiki/laktak/chkbit/readme/chkbit.gif "chkbit") ![gif of chkbit](https://raw.githubusercontent.com/laktak/chkbit-py/readme/readme/chkbit-py.gif "chkbit")
- [How it works](#how-it-works) - [How it works](#how-it-works)
- [Installation](#installation) - [Installation](#installation)
- [Usage](#usage) - [Usage](#usage)
- [Repair](#repair) - [Repair](#repair)
- [Ignore files](#ignore-files) - [Ignore files](#ignore-files)
- [chkbit as a Go module](#chkbit-as-a-go-module)
- [FAQ](#faq) - [FAQ](#faq)
- [Development](#development)
## How it works ## How it works
@ -28,39 +27,11 @@ Remember to always maintain multiple backups for comprehensive data protection.
## Installation ## Installation
- Install via [Homebrew](https://formulae.brew.sh/formula/chkbit) for macOS and Linux:
### Binary releases ```
brew install chkbit
You can download the official chkbit binaries from the releases page and place them in your `PATH`. ```
- Download for [Linux, macOS or Windows](https://github.com/laktak/chkbit/releases).
- https://github.com/laktak/chkbit/releases
### Homebrew (macOS and Linux)
For macOS and Linux it can also be installed via [Homebrew](https://formulae.brew.sh/formula/chkbit):
```shell
brew install chkbit
```
### Build from Source
Building from the source requires Go.
- Either install it directly
```shell
go install github.com/laktak/chkbit/v5/cmd/chkbit@latest
```
- or clone and build
```shell
git clone https://github.com/laktak/chkbit
chkbit/scripts/build
# binary:
ls -l chkbit/chkbit
```
## Usage ## Usage
@ -84,18 +55,13 @@ Arguments:
Flags: Flags:
-h, --help Show context-sensitive help. -h, --help Show context-sensitive help.
-H, --tips Show tips. -H, --tips Show tips.
-c, --check check mode: chkbit will verify files in readonly mode (default mode) -u, --update update indices (without this chkbit will verify files in readonly mode)
-u, --update update mode: add and update indices --show-ignored-only only show ignored files
-a, --add-only add mode: only add new files, do not check existing (quicker) --algo="blake3" hash algorithm: md5, sha512, blake3 (default: blake3)
-i, --show-ignored-only show-ignored mode: only show ignored files -f, --force force update of damaged items
-m, --show-missing show missing files/directories -s, --skip-symlinks do not follow symlinks
--force force update of damaged items (advanced usage only)
-S, --skip-symlinks do not follow symlinks
-R, --no-recurse do not recurse into subdirectories
-D, --no-dir-in-index do not track directories in the index
-l, --log-file=STRING write to a logfile if specified -l, --log-file=STRING write to a logfile if specified
--log-verbose verbose logging --log-verbose verbose logging
--algo="blake3" hash algorithm: md5, sha512, blake3 (default: blake3)
--index-name=".chkbit" filename where chkbit stores its hashes, needs to start with '.' (default: .chkbit) --index-name=".chkbit" filename where chkbit stores its hashes, needs to start with '.' (default: .chkbit)
--ignore-name=".chkbitignore" filename that chkbit reads its ignore list from, needs to start with '.' (default: .chkbitignore) --ignore-name=".chkbitignore" filename that chkbit reads its ignore list from, needs to start with '.' (default: .chkbitignore)
-w, --workers=5 number of workers to use (default: 5) -w, --workers=5 number of workers to use (default: 5)
@ -105,27 +71,6 @@ Flags:
-V, --version show version information -V, --version show version information
``` ```
```
$ chkbit -H
.chkbitignore rules:
each line should contain exactly one name
you may use Unix shell-style wildcards (see README)
lines starting with '#' are skipped
lines starting with '/' are only applied to the current directory
Status codes:
DMG: error, data damage detected
EIX: error, index damaged
old: warning, file replaced by an older version
new: new file
upd: file updated
ok : check ok
del: file/directory removed
ign: ignored (see .chkbitignore)
EXC: exception/panic
```
chkbit is set to use only 5 workers by default so it will not slow your system to a crawl. You can specify a higher number to make it a lot faster if the IO throughput can also keep up. chkbit is set to use only 5 workers by default so it will not slow your system to a crawl. You can specify a higher number to make it a lot faster if the IO throughput can also keep up.
@ -155,17 +100,6 @@ Add a `.chkbitignore` file containing the names of the files/directories you wis
- hidden files (starting with a `.`) are ignored by default - hidden files (starting with a `.`) are ignored by default
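As a side note for readers of this section: following the ignore rules quoted in the removed tips block above (one name per line, Unix shell-style wildcards, `#` for comments, a leading `/` to restrict a rule to the current directory), a `.chkbitignore` file might look like the sketch below. The entries are invented for illustration and are not part of this repository.
```
# example entries only
*.tmp
/cache
node_modules
```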
## chkbit as a Go module
chkbit can also be used in other Go programs.
```
go get github.com/laktak/chkbit/v5
```
For more information see the documentation on [pkg.go.dev](https://pkg.go.dev/github.com/laktak/chkbit/v5).
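To make the removed module section easier to follow, here is a minimal sketch of driving the library directly. It is assembled from identifiers that appear elsewhere in this compare (`NewContext`, `UpdateIndex`, `LogQueue`, and `Start` in the left-hand column of the context.go and main.go diffs) and is illustrative only, not a documented API.
```go
// Illustrative sketch only: identifiers are taken from the left-hand column of
// the context.go/main.go diffs below and may not match any released API exactly.
package main

import (
	"fmt"

	"github.com/laktak/chkbit/v5"
)

func main() {
	// workers, hash algorithm, index filename, ignore filename
	ctx, err := chkbit.NewContext(5, "blake3", ".chkbit", ".chkbitignore")
	if err != nil {
		panic(err)
	}
	ctx.UpdateIndex = true // without this the run verifies in readonly mode

	done := make(chan struct{})
	go func() {
		defer close(done)
		for {
			item := <-ctx.LogQueue
			if item == nil { // a nil event marks the end of the run
				return
			}
			fmt.Println(item.Stat.String(), item.Message)
		}
	}()

	ctx.Start([]string{"."})
	<-done
}
```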
## FAQ ## FAQ
### Should I run `chkbit` on my whole drive? ### Should I run `chkbit` on my whole drive?

View File

@ -1,6 +1,6 @@
package main package main
var headerHelp = `Checks the data integrity of your files. var headerHelp = `Checks the data integrity of your files.
For help tips run "chkbit -H" or go to For help tips run "chkbit -H" or go to
https://github.com/laktak/chkbit https://github.com/laktak/chkbit
` `
@ -19,7 +19,6 @@ Status codes:
new: new file new: new file
upd: file updated upd: file updated
ok : check ok ok : check ok
del: file/directory removed
ign: ignored (see .chkbitignore) ign: ignored (see .chkbitignore)
EXC: exception/panic EXC: exception/panic
` `

View File

@ -10,8 +10,8 @@ import (
"time" "time"
"github.com/alecthomas/kong" "github.com/alecthomas/kong"
"github.com/laktak/chkbit/v5" "github.com/laktak/chkbit"
"github.com/laktak/chkbit/v5/cmd/chkbit/util" "github.com/laktak/chkbit/cmd/chkbit/util"
"github.com/laktak/lterm" "github.com/laktak/lterm"
) )
@ -25,7 +25,7 @@ const (
) )
const ( const (
updateInterval = time.Millisecond * 700 updateInterval = time.Millisecond * 300
sizeMB int64 = 1024 * 1024 sizeMB int64 = 1024 * 1024
) )
@ -44,18 +44,13 @@ var (
var cli struct { var cli struct {
Paths []string `arg:"" optional:"" name:"paths" help:"directories to check"` Paths []string `arg:"" optional:"" name:"paths" help:"directories to check"`
Tips bool `short:"H" help:"Show tips."` Tips bool `short:"H" help:"Show tips."`
Check bool `short:"c" help:"check mode: chkbit will verify files in readonly mode (default mode)"` Update bool `short:"u" help:"update indices (without this chkbit will verify files in readonly mode)"`
Update bool `short:"u" help:"update mode: add and update indices"` ShowIgnoredOnly bool `help:"only show ignored files"`
AddOnly bool `short:"a" help:"add mode: only add new files, do not check existing (quicker)"` Algo string `default:"blake3" help:"hash algorithm: md5, sha512, blake3 (default: blake3)"`
ShowIgnoredOnly bool `short:"i" help:"show-ignored mode: only show ignored files"` Force bool `short:"f" help:"force update of damaged items"`
ShowMissing bool `short:"m" help:"show missing files/directories"` SkipSymlinks bool `short:"s" help:"do not follow symlinks"`
Force bool `help:"force update of damaged items (advanced usage only)"`
SkipSymlinks bool `short:"S" help:"do not follow symlinks"`
NoRecurse bool `short:"R" help:"do not recurse into subdirectories"`
NoDirInIndex bool `short:"D" help:"do not track directories in the index"`
LogFile string `short:"l" help:"write to a logfile if specified"` LogFile string `short:"l" help:"write to a logfile if specified"`
LogVerbose bool `help:"verbose logging"` LogVerbose bool `help:"verbose logging"`
Algo string `default:"blake3" help:"hash algorithm: md5, sha512, blake3 (default: blake3)"`
IndexName string `default:".chkbit" help:"filename where chkbit stores its hashes, needs to start with '.' (default: .chkbit)"` IndexName string `default:".chkbit" help:"filename where chkbit stores its hashes, needs to start with '.' (default: .chkbit)"`
IgnoreName string `default:".chkbitignore" help:"filename that chkbit reads its ignore list from, needs to start with '.' (default: .chkbitignore)"` IgnoreName string `default:".chkbitignore" help:"filename that chkbit reads its ignore list from, needs to start with '.' (default: .chkbitignore)"`
Workers int `short:"w" default:"5" help:"number of workers to use (default: 5)"` Workers int `short:"w" default:"5" help:"number of workers to use (default: 5)"`
@ -66,68 +61,76 @@ var cli struct {
} }
type Main struct { type Main struct {
context *chkbit.Context
dmgList []string dmgList []string
errList []string errList []string
numIdxUpd int
numNew int
numUpd int
verbose bool verbose bool
logger *log.Logger logger *log.Logger
logVerbose bool logVerbose bool
progress Progress progress Progress
total int
termWidth int termWidth int
fps *util.RateCalc fps *RateCalc
bps *util.RateCalc bps *RateCalc
} }
func (m *Main) log(text string) { func (m *Main) log(text string) {
m.logger.Println(time.Now().UTC().Format("2006-01-02 15:04:05"), text) m.logger.Println(time.Now().UTC().Format("2006-01-02 15:04:05"), text)
} }
func (m *Main) logStatus(stat chkbit.Status, message string) bool { func (m *Main) logStatus(stat chkbit.Status, path string) {
if stat == chkbit.STATUS_UPDATE_INDEX { if stat == chkbit.STATUS_UPDATE_INDEX {
return false m.numIdxUpd++
} } else {
if stat == chkbit.STATUS_ERR_DMG {
if stat == chkbit.STATUS_ERR_DMG { m.total++
m.dmgList = append(m.dmgList, message) m.dmgList = append(m.dmgList, path)
} else if stat == chkbit.STATUS_PANIC { } else if stat == chkbit.STATUS_PANIC {
m.errList = append(m.errList, message) m.errList = append(m.errList, path)
} } else if stat == chkbit.STATUS_OK || stat == chkbit.STATUS_UPDATE || stat == chkbit.STATUS_NEW {
m.total++
if m.logVerbose || !stat.IsVerbose() { if stat == chkbit.STATUS_UPDATE {
m.log(stat.String() + " " + message) m.numUpd++
} } else if stat == chkbit.STATUS_NEW {
m.numNew++
if m.verbose || !stat.IsVerbose() { }
col := "" }
if stat.IsErrorOrWarning() {
col = termAlertFG if m.logVerbose || stat != chkbit.STATUS_OK && stat != chkbit.STATUS_IGNORE {
m.log(stat.String() + " " + path)
}
if m.verbose || !stat.IsVerbose() {
col := ""
if stat.IsErrorOrWarning() {
col = termAlertFG
}
lterm.Printline(col, stat.String(), " ", path, lterm.Reset)
} }
lterm.Printline(col, stat.String(), " ", message, lterm.Reset)
return true
} }
return false
} }
func (m *Main) showStatus() { func (m *Main) showStatus(context *chkbit.Context) {
last := time.Now().Add(-updateInterval) last := time.Now().Add(-updateInterval)
stat := "" stat := ""
for { for {
select { select {
case item := <-m.context.LogQueue: case item := <-context.LogQueue:
if item == nil { if item == nil {
if m.progress == Fancy { if m.progress == Fancy {
lterm.Printline("") lterm.Printline("")
} }
return return
} }
if m.logStatus(item.Stat, item.Message) { m.logStatus(item.Stat, item.Message)
if m.progress == Fancy { if m.progress == Fancy {
lterm.Write(termBG, termFG1, stat, lterm.ClearLine(0), lterm.Reset, "\r") lterm.Write(termBG, termFG1, stat, lterm.ClearLine(0), lterm.Reset, "\r")
} else { } else {
fmt.Print(m.context.NumTotal, "\r") fmt.Print(m.total, "\r")
}
} }
case perf := <-m.context.PerfQueue: case perf := <-context.PerfQueue:
now := time.Now() now := time.Now()
m.fps.Push(now, perf.NumFiles) m.fps.Push(now, perf.NumFiles)
m.bps.Push(now, perf.NumBytes) m.bps.Push(now, perf.NumBytes)
@ -137,11 +140,11 @@ func (m *Main) showStatus() {
statF := fmt.Sprintf("%d files/s", m.fps.Last()) statF := fmt.Sprintf("%d files/s", m.fps.Last())
statB := fmt.Sprintf("%d MB/s", m.bps.Last()/sizeMB) statB := fmt.Sprintf("%d MB/s", m.bps.Last()/sizeMB)
stat = "RW" stat = "RW"
if !m.context.UpdateIndex { if !context.Update {
stat = "RO" stat = "RO"
} }
stat = fmt.Sprintf("[%s:%d] %5d files $ %s %-13s $ %s %-13s", stat = fmt.Sprintf("[%s:%d] %5d files $ %s %-13s $ %s %-13s",
stat, m.context.NumWorkers, m.context.NumTotal, stat, context.NumWorkers, m.total,
util.Sparkline(m.fps.Stats), statF, util.Sparkline(m.fps.Stats), statF,
util.Sparkline(m.bps.Stats), statB) util.Sparkline(m.bps.Stats), statB)
stat = util.LeftTruncate(stat, m.termWidth-1) stat = util.LeftTruncate(stat, m.termWidth-1)
@ -149,49 +152,38 @@ func (m *Main) showStatus() {
stat = strings.Replace(stat, "$", termSepFG+termSep+termFG3, 1) stat = strings.Replace(stat, "$", termSepFG+termSep+termFG3, 1)
lterm.Write(termBG, termFG1, stat, lterm.ClearLine(0), lterm.Reset, "\r") lterm.Write(termBG, termFG1, stat, lterm.ClearLine(0), lterm.Reset, "\r")
} else if m.progress == Plain { } else if m.progress == Plain {
fmt.Print(m.context.NumTotal, "\r") fmt.Print(m.total, "\r")
} }
} }
} }
} }
} }
func (m *Main) process() bool { func (m *Main) process() *chkbit.Context {
// verify mode if cli.Update && cli.ShowIgnoredOnly {
var b01 = map[bool]int8{false: 0, true: 1} fmt.Println("Error: use either --update or --show-ignored-only!")
if b01[cli.Check]+b01[cli.Update]+b01[cli.AddOnly]+b01[cli.ShowIgnoredOnly] > 1 { return nil
fmt.Println("Error: can only run one mode at a time!")
os.Exit(1)
} }
var err error context, err := chkbit.NewContext(cli.Workers, cli.Force, cli.Update, cli.ShowIgnoredOnly, cli.Algo, cli.SkipSymlinks, cli.IndexName, cli.IgnoreName)
m.context, err = chkbit.NewContext(cli.Workers, cli.Algo, cli.IndexName, cli.IgnoreName)
if err != nil { if err != nil {
fmt.Println(err) fmt.Println(err)
return false return nil
} }
m.context.ForceUpdateDmg = cli.Force
m.context.UpdateIndex = cli.Update || cli.AddOnly
m.context.AddOnly = cli.AddOnly
m.context.ShowIgnoredOnly = cli.ShowIgnoredOnly
m.context.ShowMissing = cli.ShowMissing
m.context.SkipSymlinks = cli.SkipSymlinks
m.context.SkipSubdirectories = cli.NoRecurse
m.context.TrackDirectories = !cli.NoDirInIndex
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
m.showStatus() m.showStatus(context)
}() }()
m.context.Start(cli.Paths) context.Start(cli.Paths)
wg.Wait() wg.Wait()
return true return context
} }
func (m *Main) printResult() { func (m *Main) printResult(context *chkbit.Context) {
cprint := func(col, text string) { cprint := func(col, text string) {
if m.progress != Quiet { if m.progress != Quiet {
if m.progress == Fancy { if m.progress == Fancy {
@ -214,14 +206,14 @@ func (m *Main) printResult() {
if m.progress != Quiet { if m.progress != Quiet {
mode := "" mode := ""
if !m.context.UpdateIndex { if !context.Update {
mode = " in readonly mode" mode = " in readonly mode"
} }
status := fmt.Sprintf("Processed %s%s.", util.LangNum1MutateSuffix(m.context.NumTotal, "file"), mode) status := fmt.Sprintf("Processed %s%s.", util.LangNum1MutateSuffix(m.total, "file"), mode)
cprint(termOKFG, status) cprint(termOKFG, status)
m.log(status) m.log(status)
if m.progress == Fancy && m.context.NumTotal > 0 { if m.progress == Fancy && m.total > 0 {
elapsed := time.Since(m.fps.Start) elapsed := time.Since(m.fps.Start)
elapsedS := elapsed.Seconds() elapsedS := elapsed.Seconds()
fmt.Println("-", elapsed.Truncate(time.Second), "elapsed") fmt.Println("-", elapsed.Truncate(time.Second), "elapsed")
@ -229,26 +221,17 @@ func (m *Main) printResult() {
fmt.Printf("- %.2f MB/second\n", (float64(m.bps.Total)+float64(m.bps.Current))/float64(sizeMB)/elapsedS) fmt.Printf("- %.2f MB/second\n", (float64(m.bps.Total)+float64(m.bps.Current))/float64(sizeMB)/elapsedS)
} }
del := "" if context.Update {
if m.context.UpdateIndex { if m.numIdxUpd > 0 {
if m.context.NumIdxUpd > 0 { cprint(termOKFG, fmt.Sprintf("- %s updated\n- %s added\n- %s updated",
if m.context.NumDel > 0 { util.LangNum1Choice(m.numIdxUpd, "directory was", "directories were"),
del = fmt.Sprintf("\n- %s been removed", util.LangNum1Choice(m.context.NumDel, "file/directory has", "files/directories have")) util.LangNum1Choice(m.numNew, "file hash was", "file hashes were"),
} util.LangNum1Choice(m.numUpd, "file hash was", "file hashes were")))
cprint(termOKFG, fmt.Sprintf("- %s updated\n- %s added\n- %s updated%s",
util.LangNum1Choice(m.context.NumIdxUpd, "directory was", "directories were"),
util.LangNum1Choice(m.context.NumNew, "file hash was", "file hashes were"),
util.LangNum1Choice(m.context.NumUpd, "file hash was", "file hashes were"),
del))
} }
} else if m.context.NumNew+m.context.NumUpd+m.context.NumDel > 0 { } else if m.numNew+m.numUpd > 0 {
if m.context.NumDel > 0 { cprint(termAlertFG, fmt.Sprintf("No changes were made (specify -u to update):\n- %s would have been added and\n- %s would have been updated.",
del = fmt.Sprintf("\n- %s would have been removed", util.LangNum1Choice(m.context.NumDel, "file/directory", "files/directories")) util.LangNum1MutateSuffix(m.numNew, "file"),
} util.LangNum1MutateSuffix(m.numUpd, "file")))
cprint(termAlertFG, fmt.Sprintf("No changes were made (specify -u to update):\n- %s would have been added\n- %s would have been updated%s",
util.LangNum1MutateSuffix(m.context.NumNew, "file"),
util.LangNum1MutateSuffix(m.context.NumUpd, "file"),
del))
} }
} }
@ -324,8 +307,9 @@ func (m *Main) run() {
if len(cli.Paths) > 0 { if len(cli.Paths) > 0 {
m.log("chkbit " + strings.Join(cli.Paths, ", ")) m.log("chkbit " + strings.Join(cli.Paths, ", "))
if m.process() && !m.context.ShowIgnoredOnly { context := m.process()
m.printResult() if context != nil && !context.ShowIgnoredOnly {
m.printResult(context)
} }
} else { } else {
fmt.Println("specify a path to check, see -h") fmt.Println("specify a path to check, see -h")
@ -344,8 +328,8 @@ func main() {
m := &Main{ m := &Main{
logger: log.New(io.Discard, "", 0), logger: log.New(io.Discard, "", 0),
termWidth: termWidth, termWidth: termWidth,
fps: util.NewRateCalc(time.Second, (termWidth-70)/2), fps: NewRateCalc(time.Second, (termWidth-70)/2),
bps: util.NewRateCalc(time.Second, (termWidth-70)/2), bps: NewRateCalc(time.Second, (termWidth-70)/2),
} }
m.run() m.run()
} }

View File

@ -1,4 +1,4 @@
package util package main
import ( import (
"time" "time"

View File

@ -8,32 +8,21 @@ import (
) )
type Context struct { type Context struct {
NumWorkers int NumWorkers int
UpdateIndex bool Force bool
AddOnly bool Update bool
ShowIgnoredOnly bool ShowIgnoredOnly bool
ShowMissing bool HashAlgo string
ForceUpdateDmg bool SkipSymlinks bool
HashAlgo string IndexFilename string
TrackDirectories bool IgnoreFilename string
SkipSymlinks bool WorkQueue chan *WorkItem
SkipSubdirectories bool LogQueue chan *LogEvent
IndexFilename string PerfQueue chan *PerfEvent
IgnoreFilename string wg sync.WaitGroup
WorkQueue chan *WorkItem
LogQueue chan *LogEvent
PerfQueue chan *PerfEvent
wg sync.WaitGroup
mutex sync.Mutex
NumTotal int
NumIdxUpd int
NumNew int
NumUpd int
NumDel int
} }
func NewContext(numWorkers int, hashAlgo string, indexFilename string, ignoreFilename string) (*Context, error) { func NewContext(numWorkers int, force bool, update bool, showIgnoredOnly bool, hashAlgo string, skipSymlinks bool, indexFilename string, ignoreFilename string) (*Context, error) {
if indexFilename[0] != '.' { if indexFilename[0] != '.' {
return nil, errors.New("The index filename must start with a dot!") return nil, errors.New("The index filename must start with a dot!")
} }
@ -44,44 +33,21 @@ func NewContext(numWorkers int, hashAlgo string, indexFilename string, ignoreFil
return nil, errors.New(hashAlgo + " is unknown.") return nil, errors.New(hashAlgo + " is unknown.")
} }
return &Context{ return &Context{
NumWorkers: numWorkers, NumWorkers: numWorkers,
HashAlgo: hashAlgo, Force: force,
IndexFilename: indexFilename, Update: update,
IgnoreFilename: ignoreFilename, ShowIgnoredOnly: showIgnoredOnly,
WorkQueue: make(chan *WorkItem, numWorkers*10), HashAlgo: hashAlgo,
LogQueue: make(chan *LogEvent, numWorkers*100), SkipSymlinks: skipSymlinks,
PerfQueue: make(chan *PerfEvent, numWorkers*10), IndexFilename: indexFilename,
IgnoreFilename: ignoreFilename,
WorkQueue: make(chan *WorkItem, numWorkers*10),
LogQueue: make(chan *LogEvent, numWorkers*100),
PerfQueue: make(chan *PerfEvent, numWorkers*10),
}, nil }, nil
} }
func (context *Context) log(stat Status, message string) { func (context *Context) log(stat Status, message string) {
context.mutex.Lock()
defer context.mutex.Unlock()
switch stat {
case STATUS_ERR_DMG:
context.NumTotal++
case STATUS_UPDATE_INDEX:
context.NumIdxUpd++
case STATUS_UP_WARN_OLD:
context.NumTotal++
context.NumUpd++
case STATUS_UPDATE:
context.NumTotal++
context.NumUpd++
case STATUS_NEW:
context.NumTotal++
context.NumNew++
case STATUS_OK:
if !context.AddOnly {
context.NumTotal++
}
case STATUS_MISSING:
context.NumDel++
//case STATUS_PANIC:
//case STATUS_ERR_IDX:
//case STATUS_IGNORE:
}
context.LogQueue <- &LogEvent{stat, message} context.LogQueue <- &LogEvent{stat, message}
} }
@ -97,8 +63,8 @@ func (context *Context) perfMonBytes(numBytes int64) {
context.PerfQueue <- &PerfEvent{0, numBytes} context.PerfQueue <- &PerfEvent{0, numBytes}
} }
func (context *Context) addWork(path string, filesToIndex []string, dirList []string, ignore *Ignore) { func (context *Context) addWork(path string, filesToIndex []string, ignore *Ignore) {
context.WorkQueue <- &WorkItem{path, filesToIndex, dirList, ignore} context.WorkQueue <- &WorkItem{path, filesToIndex, ignore}
} }
func (context *Context) endWork() { func (context *Context) endWork() {
@ -110,18 +76,12 @@ func (context *Context) isChkbitFile(name string) bool {
} }
func (context *Context) Start(pathList []string) { func (context *Context) Start(pathList []string) {
context.NumTotal = 0
context.NumIdxUpd = 0
context.NumNew = 0
context.NumUpd = 0
context.NumDel = 0
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(context.NumWorkers) wg.Add(context.NumWorkers)
for i := 0; i < context.NumWorkers; i++ { for i := 0; i < context.NumWorkers; i++ {
go func(id int) { go func(id int) {
defer wg.Done() defer wg.Done()
context.runWorker(id) context.RunWorker(id)
}(i) }(i)
} }
go func() { go func() {
@ -161,11 +121,6 @@ func (context *Context) scanDir(root string, parentIgnore *Ignore) {
var dirList []string var dirList []string
var filesToIndex []string var filesToIndex []string
ignore, err := GetIgnore(context, root, parentIgnore)
if err != nil {
context.logErr(root+"/", err)
}
for _, file := range files { for _, file := range files {
path := filepath.Join(root, file.Name()) path := filepath.Join(root, file.Name())
if file.Name()[0] == '.' { if file.Name()[0] == '.' {
@ -175,21 +130,24 @@ func (context *Context) scanDir(root string, parentIgnore *Ignore) {
continue continue
} }
if isDir(file, path) { if isDir(file, path) {
if !ignore.shouldIgnore(file.Name()) { dirList = append(dirList, file.Name())
dirList = append(dirList, file.Name())
} else {
context.log(STATUS_IGNORE, file.Name()+"/")
}
} else if file.Type().IsRegular() { } else if file.Type().IsRegular() {
filesToIndex = append(filesToIndex, file.Name()) filesToIndex = append(filesToIndex, file.Name())
} }
} }
context.addWork(root, filesToIndex, dirList, ignore) ignore, err := GetIgnore(context, root, parentIgnore)
if err != nil {
context.logErr(root+"/", err)
}
if !context.SkipSubdirectories { context.addWork(root, filesToIndex, ignore)
for _, name := range dirList {
for _, name := range dirList {
if !ignore.shouldIgnore(name) {
context.scanDir(filepath.Join(root, name), ignore) context.scanDir(filepath.Join(root, name), ignore)
} else {
context.log(STATUS_IGNORE, name+"/")
} }
} }
} }

go.mod (2 changed lines)
View File

@ -1,4 +1,4 @@
module github.com/laktak/chkbit/v5 module github.com/laktak/chkbit
go 1.22.3 go 1.22.3

View File

@ -50,7 +50,7 @@ func Hashfile(path string, hashAlgo string, perfMonBytes func(int64)) (string, e
return hex.EncodeToString(h.Sum(nil)), nil return hex.EncodeToString(h.Sum(nil)), nil
} }
func hashMd5(data []byte) string { func HashMd5(data []byte) string {
h := md5.New() h := md5.New()
h.Write(data) h.Write(data)
return hex.EncodeToString(h.Sum(nil)) return hex.EncodeToString(h.Sum(nil))
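The hunk header above exposes the standalone `Hashfile` helper. Assuming that signature, a caller outside the worker pipeline could hash a single file along these lines; the path and the no-op progress callback are illustrative assumptions, not documented usage.
```go
// Sketch based on the Hashfile signature visible in the hunk header above;
// illustrative only.
package main

import (
	"fmt"

	"github.com/laktak/chkbit/v5"
)

func main() {
	// a no-op callback stands in for the per-read byte counter (assumption)
	digest, err := chkbit.Hashfile("go.mod", "blake3", func(int64) {})
	if err != nil {
		panic(err)
	}
	fmt.Println("blake3:", digest)
}
```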

index.go (155 changed lines)
View File

@ -5,7 +5,6 @@ import (
"errors" "errors"
"os" "os"
"path/filepath" "path/filepath"
"slices"
) )
const VERSION = 2 // index version const VERSION = 2 // index version
@ -13,54 +12,47 @@ var (
algoMd5 = "md5" algoMd5 = "md5"
) )
type idxInfo struct { type IdxInfo struct {
ModTime int64 `json:"mod"` ModTime int64 `json:"mod"`
Algo *string `json:"a,omitempty"` Algo *string `json:"a,omitempty"`
Hash *string `json:"h,omitempty"` Hash *string `json:"h,omitempty"`
LegacyHash *string `json:"md5,omitempty"` LegacyHash *string `json:"md5,omitempty"`
} }
type indexFile struct { type IndexFile struct {
V int `json:"v"` V int `json:"v"`
// IdxRaw -> map[string]idxInfo
IdxRaw json.RawMessage `json:"idx"` IdxRaw json.RawMessage `json:"idx"`
IdxHash string `json:"idx_hash"` IdxHash string `json:"idx_hash"`
// 2024-08 optional, list of subdirectories
Dir []string `json:"dirlist,omitempty"`
} }
type idxInfo1 struct { type IdxInfo1 struct {
ModTime int64 `json:"mod"` ModTime int64 `json:"mod"`
Hash string `json:"md5"` Hash string `json:"md5"`
} }
type indexFile1 struct { type IndexFile1 struct {
Data map[string]idxInfo1 `json:"data"` Data map[string]IdxInfo1 `json:"data"`
} }
type Index struct { type Index struct {
context *Context context *Context
path string path string
files []string files []string
cur map[string]idxInfo cur map[string]IdxInfo
new map[string]idxInfo new map[string]IdxInfo
curDirList []string updates []string
newDirList []string modified bool
modified bool readonly bool
readonly bool
} }
func newIndex(context *Context, path string, files []string, dirList []string, readonly bool) *Index { func NewIndex(context *Context, path string, files []string, readonly bool) *Index {
slices.Sort(dirList)
return &Index{ return &Index{
context: context, context: context,
path: path, path: path,
files: files, files: files,
cur: make(map[string]idxInfo), cur: make(map[string]IdxInfo),
new: make(map[string]idxInfo), new: make(map[string]IdxInfo),
curDirList: make([]string, 0), readonly: readonly,
newDirList: dirList,
readonly: readonly,
} }
} }
@ -68,6 +60,10 @@ func (i *Index) getIndexFilepath() string {
return filepath.Join(i.path, i.context.IndexFilename) return filepath.Join(i.path, i.context.IndexFilename)
} }
func (i *Index) setMod(value bool) {
i.modified = value
}
func (i *Index) logFilePanic(name string, message string) { func (i *Index) logFilePanic(name string, message string) {
i.context.log(STATUS_PANIC, filepath.Join(i.path, name)+": "+message) i.context.log(STATUS_PANIC, filepath.Join(i.path, name)+": "+message)
} }
@ -76,10 +72,6 @@ func (i *Index) logFile(stat Status, name string) {
i.context.log(stat, filepath.Join(i.path, name)) i.context.log(stat, filepath.Join(i.path, name))
} }
func (i *Index) logDir(stat Status, name string) {
i.context.log(stat, filepath.Join(i.path, name)+"/")
}
func (i *Index) calcHashes(ignore *Ignore) { func (i *Index) calcHashes(ignore *Ignore) {
for _, name := range i.files { for _, name := range i.files {
if ignore != nil && ignore.shouldIgnore(name) { if ignore != nil && ignore.shouldIgnore(name) {
@ -88,13 +80,13 @@ func (i *Index) calcHashes(ignore *Ignore) {
} }
var err error var err error
var info *idxInfo var info *IdxInfo
algo := i.context.HashAlgo algo := i.context.HashAlgo
if val, ok := i.cur[name]; ok { if val, ok := i.cur[name]; ok {
// existing file // existing
if val.LegacyHash != nil { if val.LegacyHash != nil {
// convert from py1 to new format // convert from py1 to new format
val = idxInfo{ val = IdxInfo{
ModTime: val.ModTime, ModTime: val.ModTime,
Algo: &algoMd5, Algo: &algoMd5,
Hash: val.LegacyHash, Hash: val.LegacyHash,
@ -104,15 +96,10 @@ func (i *Index) calcHashes(ignore *Ignore) {
if val.Algo != nil { if val.Algo != nil {
algo = *val.Algo algo = *val.Algo
} }
if i.context.AddOnly { info, err = i.calcFile(name, algo)
info = &val
} else {
info, err = i.calcFile(name, algo)
}
} else { } else {
// new file
if i.readonly { if i.readonly {
info = &idxInfo{Algo: &algo} info = &IdxInfo{Algo: &algo}
} else { } else {
info, err = i.calcFile(name, algo) info, err = i.calcFile(name, algo)
} }
@ -133,70 +120,42 @@ func (i *Index) showIgnoredOnly(ignore *Ignore) {
} }
} }
func (i *Index) checkFix(forceUpdateDmg bool) { func (i *Index) checkFix(force bool) {
for name, b := range i.new { for name, b := range i.new {
if a, ok := i.cur[name]; !ok { if a, ok := i.cur[name]; !ok {
i.logFile(STATUS_NEW, name) i.logFile(STATUS_NEW, name)
i.modified = true i.setMod(true)
continue
} else { } else {
amod := int64(a.ModTime) amod := int64(a.ModTime)
bmod := int64(b.ModTime) bmod := int64(b.ModTime)
if a.Hash != nil && b.Hash != nil && *a.Hash == *b.Hash { if a.Hash != nil && b.Hash != nil && *a.Hash == *b.Hash {
i.logFile(STATUS_OK, name) i.logFile(STATUS_OK, name)
if amod != bmod { if amod != bmod {
i.modified = true i.setMod(true)
} }
continue continue
} }
if amod == bmod { if amod == bmod {
i.logFile(STATUS_ERR_DMG, name) i.logFile(STATUS_ERR_DMG, name)
if !forceUpdateDmg { if !force {
// keep DMG entry
i.new[name] = a i.new[name] = a
} else { } else {
i.modified = true i.setMod(true)
} }
} else if amod < bmod { } else if amod < bmod {
i.logFile(STATUS_UPDATE, name) i.logFile(STATUS_UPDATE, name)
i.modified = true i.setMod(true)
} else if amod > bmod { } else if amod > bmod {
i.logFile(STATUS_UP_WARN_OLD, name) i.logFile(STATUS_WARN_OLD, name)
i.modified = true i.setMod(true)
} }
} }
} }
// track missing
for name := range i.cur {
if _, ok := i.new[name]; !ok {
i.modified = true
if i.context.ShowMissing {
i.logFile(STATUS_MISSING, name)
}
}
}
// dirs
m := make(map[string]bool)
for _, n := range i.newDirList {
m[n] = true
}
for _, name := range i.curDirList {
if !m[name] {
i.modified = true
if i.context.ShowMissing {
i.logDir(STATUS_MISSING, name+"/")
}
}
}
if len(i.newDirList) != len(i.curDirList) {
// added
i.modified = true
}
} }
func (i *Index) calcFile(name string, a string) (*idxInfo, error) { func (i *Index) calcFile(name string, a string) (*IdxInfo, error) {
path := filepath.Join(i.path, name) path := filepath.Join(i.path, name)
info, _ := os.Stat(path) info, _ := os.Stat(path)
mtime := int64(info.ModTime().UnixNano() / 1e6) mtime := int64(info.ModTime().UnixNano() / 1e6)
@ -205,7 +164,7 @@ func (i *Index) calcFile(name string, a string) (*idxInfo, error) {
return nil, err return nil, err
} }
i.context.perfMonFiles(1) i.context.perfMonFiles(1)
return &idxInfo{ return &IdxInfo{
ModTime: mtime, ModTime: mtime,
Algo: &a, Algo: &a,
Hash: &h, Hash: &h,
@ -222,13 +181,10 @@ func (i *Index) save() (bool, error) {
if err != nil { if err != nil {
return false, err return false, err
} }
data := indexFile{ data := IndexFile{
V: VERSION, V: VERSION,
IdxRaw: text, IdxRaw: text,
IdxHash: hashMd5(text), IdxHash: HashMd5(text),
}
if i.context.TrackDirectories {
data.Dir = i.newDirList
} }
file, err := json.Marshal(data) file, err := json.Marshal(data)
@ -239,7 +195,7 @@ func (i *Index) save() (bool, error) {
if err != nil { if err != nil {
return false, err return false, err
} }
i.modified = false i.setMod(false)
return true, nil return true, nil
} else { } else {
return false, nil return false, nil
@ -253,12 +209,12 @@ func (i *Index) load() error {
} }
return err return err
} }
i.modified = false i.setMod(false)
file, err := os.ReadFile(i.getIndexFilepath()) file, err := os.ReadFile(i.getIndexFilepath())
if err != nil { if err != nil {
return err return err
} }
var data indexFile var data IndexFile
err = json.Unmarshal(file, &data) err = json.Unmarshal(file, &data)
if err != nil { if err != nil {
return err return err
@ -269,22 +225,22 @@ func (i *Index) load() error {
return err return err
} }
text := data.IdxRaw text := data.IdxRaw
if data.IdxHash != hashMd5(text) { if data.IdxHash != HashMd5(text) {
// old versions may have saved the JSON encoded with extra spaces // old versions may have save the JSON encoded with extra spaces
text, _ = json.Marshal(data.IdxRaw) text, _ = json.Marshal(data.IdxRaw)
} else { } else {
} }
if data.IdxHash != hashMd5(text) { if data.IdxHash != HashMd5(text) {
i.modified = true i.setMod(true)
i.logFile(STATUS_ERR_IDX, i.getIndexFilepath()) i.logFile(STATUS_ERR_IDX, i.getIndexFilepath())
} }
} else { } else {
var data1 indexFile1 var data1 IndexFile1
json.Unmarshal(file, &data1) json.Unmarshal(file, &data1)
if data1.Data != nil { if data1.Data != nil {
// convert from js to new format // convert from js to new format
for name, item := range data1.Data { for name, item := range data1.Data {
i.cur[name] = idxInfo{ i.cur[name] = IdxInfo{
ModTime: item.ModTime, ModTime: item.ModTime,
Algo: &algoMd5, Algo: &algoMd5,
Hash: &item.Hash, Hash: &item.Hash,
@ -292,12 +248,5 @@ func (i *Index) load() error {
} }
} }
} }
// dirs
if data.Dir != nil {
slices.Sort(data.Dir)
i.curDirList = data.Dir
}
return nil return nil
} }
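For orientation, the struct tags in this file describe the JSON that chkbit writes into each `.chkbit` index. Assuming the left-hand column's layout (which records `dirlist` when directory tracking is enabled), an index might look roughly like the sketch below; the file names, the millisecond `mod` timestamps, and the digests are invented placeholders, `idx_hash` is the MD5 of the serialized `idx` value, and legacy entries may instead carry an `md5` field per the `LegacyHash` tag.
```json
{
  "v": 2,
  "idx": {
    "report.pdf": { "mod": 1700000000000, "a": "blake3", "h": "<blake3 hex digest>" },
    "notes.txt": { "mod": 1700000000000, "a": "blake3", "h": "<blake3 hex digest>" }
  },
  "idx_hash": "<md5 hex digest of the idx value>",
  "dirlist": ["docs", "photos"]
}
```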

View File

@ -1,342 +0,0 @@
package main
import (
"fmt"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"testing"
"time"
)
// perform integration test using the compiled binary
var testDir = "/tmp/chkbit"
func getCmd() string {
_, filename, _, _ := runtime.Caller(0)
prjRoot := filepath.Dir(filepath.Dir(filename))
return filepath.Join(prjRoot, "chkbit")
}
func checkOut(t *testing.T, sout string, expected string) {
if !strings.Contains(sout, expected) {
t.Errorf("Expected '%s' in output, got '%s'\n", expected, sout)
}
}
func checkNotOut(t *testing.T, sout string, notExpected string) {
if strings.Contains(sout, notExpected) {
t.Errorf("Did not expect '%s' in output, got '%s'\n", notExpected, sout)
}
}
// misc files
var (
startList = []string{"time", "year", "people", "way", "day", "thing"}
wordList = []string{"life", "world", "school", "state", "family", "student", "group", "country", "problem", "hand", "part", "place", "case", "week", "company", "system", "program", "work", "government", "number", "night", "point", "home", "water", "room", "mother", "area", "money", "story", "fact", "month", "lot", "right", "study", "book", "eye", "job", "word", "business", "issue", "side", "kind", "head", "house", "service", "friend", "father", "power", "hour", "game", "line", "end", "member", "law", "car", "city", "community", "name", "president", "team", "minute", "idea", "kid", "body", "information", "back", "face", "others", "level", "office", "door", "health", "person", "art", "war", "history", "party", "result", "change", "morning", "reason", "research", "moment", "air", "teacher", "force", "education"}
extList = []string{"txt", "md", "pdf", "jpg", "jpeg", "png", "mp4", "mp3", "csv"}
startDate = time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC)
endDate = time.Date(2024, 12, 1, 0, 0, 0, 0, time.UTC)
dateList = []time.Time{}
wordIdx = 0
extIdx = 0
dateIdx = 0
)
func nextWord() string {
word := wordList[wordIdx%len(wordList)]
wordIdx++
return word
}
func nextExt() string {
ext := extList[extIdx%len(extList)]
extIdx++
return ext
}
func setDate(filename string, r int) {
date := dateList[dateIdx%len(dateList)]
m := 17 * dateIdx / len(dateList)
date = date.Add(time.Duration(m) * time.Hour)
dateIdx++
os.Chtimes(filename, date, date)
}
func genFile(path string, size int) {
os.WriteFile(path, make([]byte, size), 0644)
setDate(path, size*size)
}
func genFiles(dir string, a int) {
os.MkdirAll(dir, 0755)
for i := 1; i <= 5; i++ {
size := a*i*wordIdx*100 + extIdx
file := nextWord() + "-" + nextWord()
if i%3 == 0 {
file += "-" + nextWord()
}
file += "." + nextExt()
genFile(filepath.Join(dir, file), size)
}
}
func genDir(root string) {
for _, start := range startList {
for i := 1; i <= 5; i++ {
dir := filepath.Join(root, start, nextWord())
genFiles(dir, 1)
if wordIdx%3 == 0 {
dir = filepath.Join(dir, nextWord())
genFiles(dir, 1)
}
}
}
}
func setupMiscFiles() {
var c int64 = 50
interval := (int64)(endDate.Sub(startDate).Seconds()) / c
for i := range make([]int64, c) {
dateList = append(dateList, startDate.Add(time.Duration(interval*(int64)(i))*time.Second))
}
root := filepath.Join(testDir, "root")
if err := os.RemoveAll(testDir); err != nil {
fmt.Println("Failed to clean", err)
panic(err)
}
genDir(root)
os.MkdirAll(filepath.Join(root, "day/car/empty"), 0755)
rootPeople := filepath.Join(root, "people")
testPeople := filepath.Join(testDir, "people")
err := os.Rename(rootPeople, testPeople)
if err != nil {
fmt.Println("Rename failed", err)
panic(err)
}
err = os.Symlink(testPeople, rootPeople)
if err != nil {
fmt.Println("Symlink failed", err)
panic(err)
}
}
func TestRoot(t *testing.T) {
setupMiscFiles()
tool := getCmd()
root := filepath.Join(testDir, "root")
// update index, no recourse
t.Run("no-recourse", func(t *testing.T) {
cmd := exec.Command(tool, "-umR", filepath.Join(root, "day/office"))
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "Processed 5 files")
checkOut(t, sout, "- 1 directory was updated")
checkOut(t, sout, "- 5 file hashes were added")
checkOut(t, sout, "- 0 file hashes were updated")
checkNotOut(t, sout, "removed")
})
// update remaining index from root
t.Run("update-remaining", func(t *testing.T) {
cmd := exec.Command(tool, "-um", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "Processed 300 files")
checkOut(t, sout, "- 66 directories were updated")
checkOut(t, sout, "- 295 file hashes were added")
checkOut(t, sout, "- 0 file hashes were updated")
checkNotOut(t, sout, "removed")
})
// delete files, check for missing
t.Run("delete", func(t *testing.T) {
os.RemoveAll(filepath.Join(root, "thing/change"))
os.Remove(filepath.Join(root, "time/hour/minute/body-information.csv"))
cmd := exec.Command(tool, "-m", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "del /tmp/chkbit/root/thing/change/")
checkOut(t, sout, "2 files/directories would have been removed")
})
// do not report missing without -m
t.Run("no-missing", func(t *testing.T) {
cmd := exec.Command(tool, root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkNotOut(t, sout, "del ")
checkNotOut(t, sout, "removed")
})
// check for missing and update
t.Run("missing", func(t *testing.T) {
cmd := exec.Command(tool, "-um", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "del /tmp/chkbit/root/thing/change/")
checkOut(t, sout, "2 files/directories have been removed")
})
// check again
t.Run("repeat", func(t *testing.T) {
for i := 0; i < 10; i++ {
cmd := exec.Command(tool, "-uv", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "Processed 289 files")
checkNotOut(t, sout, "removed")
checkNotOut(t, sout, "updated")
checkNotOut(t, sout, "added")
}
})
// add files only
t.Run("add-only", func(t *testing.T) {
genFiles(filepath.Join(root, "way/add"), 99)
genFile(filepath.Join(root, "time/add-file.txt"), 500)
// modify existing, will not be reported:
genFile(filepath.Join(root, "way/job/word-business.mp3"), 500)
cmd := exec.Command(tool, "-a", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "Processed 6 files")
checkOut(t, sout, "- 3 directories were updated")
checkOut(t, sout, "- 6 file hashes were added")
checkOut(t, sout, "- 0 file hashes were updated")
})
// update remaining
t.Run("update-remaining-add", func(t *testing.T) {
cmd := exec.Command(tool, "-u", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "Processed 295 files")
checkOut(t, sout, "- 1 directory was updated")
checkOut(t, sout, "- 0 file hashes were added")
checkOut(t, sout, "- 1 file hash was updated")
})
}
func TestDMG(t *testing.T) {
testDmg := filepath.Join(testDir, "test_dmg")
if err := os.RemoveAll(testDmg); err != nil {
fmt.Println("Failed to clean", err)
panic(err)
}
if err := os.MkdirAll(testDmg, 0755); err != nil {
fmt.Println("Failed to create test directory", err)
panic(err)
}
if err := os.Chdir(testDmg); err != nil {
fmt.Println("Failed to cd test directory", err)
panic(err)
}
tool := getCmd()
testFile := filepath.Join(testDmg, "test.txt")
t1, _ := time.Parse(time.RFC3339, "2022-02-01T11:00:00Z")
t2, _ := time.Parse(time.RFC3339, "2022-02-01T12:00:00Z")
t3, _ := time.Parse(time.RFC3339, "2022-02-01T13:00:00Z")
// create test and set the modified time"
t.Run("create", func(t *testing.T) {
os.WriteFile(testFile, []byte("foo1"), 0644)
os.Chtimes(testFile, t2, t2)
cmd := exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
t.Fatalf("failed with '%s'\n", err)
} else {
checkOut(t, string(out), "new test.txt")
}
})
// update test with different content & old modified (expect 'old')"
t.Run("expect-old", func(t *testing.T) {
os.WriteFile(testFile, []byte("foo2"), 0644)
os.Chtimes(testFile, t1, t1)
cmd := exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
t.Fatalf("failed with '%s'\n", err)
} else {
checkOut(t, string(out), "old test.txt")
}
})
// update test & new modified (expect 'upd')"
t.Run("expect-upd", func(t *testing.T) {
os.WriteFile(testFile, []byte("foo3"), 0644)
os.Chtimes(testFile, t3, t3)
cmd := exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
t.Fatalf("failed with '%s'\n", err)
} else {
checkOut(t, string(out), "upd test.txt")
}
})
// Now update test with the same modified to simulate damage (expect DMG)"
t.Run("expect-DMG", func(t *testing.T) {
os.WriteFile(testFile, []byte("foo4"), 0644)
os.Chtimes(testFile, t3, t3)
cmd := exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
if cmd.ProcessState.ExitCode() != 1 {
t.Fatalf("expected to fail with exit code 1 vs %d!", cmd.ProcessState.ExitCode())
}
checkOut(t, string(out), "DMG test.txt")
} else {
t.Fatal("expected to fail!")
}
})
}

scripts/run_test_prep (new executable file, 13 added lines)
View File

@ -0,0 +1,13 @@
#!/bin/bash
export TZ='UTC'
root="/tmp/chkbit"
go run scripts/run_test_prep.go
cd $root/root
mv $root/root/people $root/people
ln -s ../people people
ln -s ../../people/face/office-door.pdf day/friend/office-door.pdf
find -L | wc -l

scripts/run_test_prep.go (new file, 89 added lines)
View File

@ -0,0 +1,89 @@
package main
import (
"fmt"
"os"
"path/filepath"
"time"
)
var (
startList = []string{"time", "year", "people", "way", "day", "thing"}
wordList = []string{"life", "world", "school", "state", "family", "student", "group", "country", "problem", "hand", "part", "place", "case", "week", "company", "system", "program", "work", "government", "number", "night", "point", "home", "water", "room", "mother", "area", "money", "story", "fact", "month", "lot", "right", "study", "book", "eye", "job", "word", "business", "issue", "side", "kind", "head", "house", "service", "friend", "father", "power", "hour", "game", "line", "end", "member", "law", "car", "city", "community", "name", "president", "team", "minute", "idea", "kid", "body", "information", "back", "face", "others", "level", "office", "door", "health", "person", "art", "war", "history", "party", "result", "change", "morning", "reason", "research", "moment", "air", "teacher", "force", "education"}
extList = []string{"txt", "md", "pdf", "jpg", "jpeg", "png", "mp4", "mp3", "csv"}
startDate = time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC)
endDate = time.Date(2024, 12, 1, 0, 0, 0, 0, time.UTC)
dateList = []time.Time{}
wordIdx = 0
extIdx = 0
dateIdx = 0
)
func nextWord() string {
word := wordList[wordIdx%len(wordList)]
wordIdx++
return word
}
func nextExt() string {
ext := extList[extIdx%len(extList)]
extIdx++
return ext
}
func setDate(filename string, r int) {
date := dateList[dateIdx%len(dateList)]
m := 17 * dateIdx / len(dateList)
date = date.Add(time.Duration(m) * time.Hour)
dateIdx++
os.Chtimes(filename, date, date)
}
func genFile(dir string, a int) {
os.MkdirAll(dir, 0755)
for i := 1; i <= 5; i++ {
size := a*i*wordIdx*100 + extIdx
file := nextWord() + "-" + nextWord()
if i%3 == 0 {
file += "-" + nextWord()
}
file += "." + nextExt()
path := filepath.Join(dir, file)
os.WriteFile(path, make([]byte, size), 0644)
setDate(path, size*size)
}
}
func genDir(root string) {
for _, start := range startList {
for i := 1; i <= 5; i++ {
dir := filepath.Join(root, start, nextWord())
genFile(dir, 1)
if wordIdx%3 == 0 {
dir = filepath.Join(dir, nextWord())
genFile(dir, 1)
}
}
}
}
func main() {
root := "/tmp/chkbit"
var c int64 = 50
interval := (int64)(endDate.Sub(startDate).Seconds()) / c
for i := range make([]int64, c) {
dateList = append(dateList, startDate.Add(time.Duration(interval*(int64)(i))*time.Second))
}
if err := os.RemoveAll(root); err == nil {
genDir(filepath.Join(root, "root"))
fmt.Println("Ready.")
} else {
fmt.Println("Failed to clean")
}
}

scripts/run_tests (new executable file, 21 added lines)
View File

@ -0,0 +1,21 @@
#!/bin/bash
set -e
export TZ='UTC'
script_dir=$(dirname "$(realpath "$0")")
base_dir=$(dirname "$script_dir")
#dir=$(realpath "$script_dir/../testdata/run_test")
root="/tmp/chkbit/root"
if [[ ! -d $root ]]; then
echo "must run run_test_prep first"
exit 1
fi
# setup
$script_dir/build
"$base_dir/chkbit" -u /tmp/chkbit
# todo: validate

View File

@ -4,8 +4,4 @@ set -e
script_dir=$(dirname "$(realpath "$0")") script_dir=$(dirname "$(realpath "$0")")
cd $script_dir/.. cd $script_dir/..
# prep go test -v ./cmd/chkbit/util
$script_dir/build
go test -v ./cmd/chkbit/util -count=1
go test -v ./scripts -count=1

View File

@ -4,15 +4,14 @@ type Status string
const ( const (
STATUS_PANIC Status = "EXC" STATUS_PANIC Status = "EXC"
STATUS_ERR_IDX Status = "EIX"
STATUS_ERR_DMG Status = "DMG" STATUS_ERR_DMG Status = "DMG"
STATUS_UPDATE_INDEX Status = "iup" STATUS_ERR_IDX Status = "EIX"
STATUS_UP_WARN_OLD Status = "old" STATUS_WARN_OLD Status = "old"
STATUS_UPDATE Status = "upd"
STATUS_NEW Status = "new" STATUS_NEW Status = "new"
STATUS_UPDATE Status = "upd"
STATUS_OK Status = "ok " STATUS_OK Status = "ok "
STATUS_IGNORE Status = "ign" STATUS_IGNORE Status = "ign"
STATUS_MISSING Status = "del" STATUS_UPDATE_INDEX Status = "iup"
) )
func (s Status) String() string { func (s Status) String() string {
@ -20,7 +19,7 @@ func (s Status) String() string {
} }
func (s Status) IsErrorOrWarning() bool { func (s Status) IsErrorOrWarning() bool {
return s == STATUS_PANIC || s == STATUS_ERR_DMG || s == STATUS_ERR_IDX || s == STATUS_UP_WARN_OLD return s == STATUS_PANIC || s == STATUS_ERR_DMG || s == STATUS_ERR_IDX || s == STATUS_WARN_OLD
} }
func (s Status) IsVerbose() bool { func (s Status) IsVerbose() bool {

View File

@ -3,18 +3,17 @@ package chkbit
type WorkItem struct { type WorkItem struct {
path string path string
filesToIndex []string filesToIndex []string
dirList []string
ignore *Ignore ignore *Ignore
} }
func (context *Context) runWorker(id int) { func (context *Context) RunWorker(id int) {
for { for {
item := <-context.WorkQueue item := <-context.WorkQueue
if item == nil { if item == nil {
break break
} }
index := newIndex(context, item.path, item.filesToIndex, item.dirList, !context.UpdateIndex) index := NewIndex(context, item.path, item.filesToIndex, !context.Update)
err := index.load() err := index.load()
if err != nil { if err != nil {
context.log(STATUS_PANIC, index.getIndexFilepath()+": "+err.Error()) context.log(STATUS_PANIC, index.getIndexFilepath()+": "+err.Error())
@ -24,9 +23,9 @@ func (context *Context) runWorker(id int) {
index.showIgnoredOnly(item.ignore) index.showIgnoredOnly(item.ignore)
} else { } else {
index.calcHashes(item.ignore) index.calcHashes(item.ignore)
index.checkFix(context.ForceUpdateDmg) index.checkFix(context.Force)
if context.UpdateIndex { if context.Update {
if changed, err := index.save(); err != nil { if changed, err := index.save(); err != nil {
context.logErr(item.path, err) context.logErr(item.path, err)
} else if changed { } else if changed {