Compare commits


No commits in common. "master" and "v5.0.1" have entirely different histories.

12 changed files with 251 additions and 519 deletions

View File

@ -8,10 +8,10 @@ chkbit is a tool that ensures the safety of your files by checking if their *dat
- [How it works](#how-it-works) - [How it works](#how-it-works)
- [Installation](#installation) - [Installation](#installation)
- [chkbit as a Go module](#chkbit-as-a-go-module)
- [Usage](#usage) - [Usage](#usage)
- [Repair](#repair) - [Repair](#repair)
- [Ignore files](#ignore-files) - [Ignore files](#ignore-files)
- [chkbit as a Go module](#chkbit-as-a-go-module)
- [FAQ](#faq) - [FAQ](#faq)
@ -50,7 +50,7 @@ Building from the source requires Go.
- Either install it directly - Either install it directly
```shell ```shell
go install github.com/laktak/chkbit/v5/cmd/chkbit@latest go install github.com/laktak/chkbit@latest
``` ```
- or clone and build - or clone and build
@ -58,11 +58,22 @@ go install github.com/laktak/chkbit/v5/cmd/chkbit@latest
```shell ```shell
git clone https://github.com/laktak/chkbit git clone https://github.com/laktak/chkbit
chkbit/scripts/build chkbit/scripts/build
# binary: # output is here:
ls -l chkbit/chkbit chkbit/chkbit
``` ```
## chkbit as a Go module
chkbit can also be used in other Go programs.
```
go get github.com/laktak/chkbit
```
For more information see the linked documentation on [pkg.go.dev](https://pkg.go.dev/github.com/laktak/chkbit).
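A minimal, hypothetical sketch of driving chkbit from Go, assuming only the `NewContext`/`Start` signatures and the `LogQueue`/`PerfQueue` handling visible elsewhere in this diff; the path and the way log items are consumed are placeholders, not documented API usage.

```go
package main

import (
	"fmt"
	"log"

	"github.com/laktak/chkbit"
)

func main() {
	// Arguments as in this release's NewContext:
	// workers, force, update, showIgnoredOnly, algo, skipSymlinks, indexName, ignoreName.
	// update=false runs a read-only verification.
	ctx, err := chkbit.NewContext(5, false, false, false, "blake3", false, ".chkbit", ".chkbitignore")
	if err != nil {
		log.Fatal(err)
	}

	// Drain the performance queue so workers never block on it.
	go func() {
		for range ctx.PerfQueue {
		}
	}()

	// Consume log events; a nil item signals that the run has finished.
	done := make(chan struct{})
	go func() {
		defer close(done)
		for item := range ctx.LogQueue {
			if item == nil {
				return
			}
			_ = item // inspect the reported status/message here
		}
	}()

	ctx.Start([]string{"/path/to/check"}) // placeholder path
	<-done
	fmt.Println("verification finished")
}
```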
## Usage ## Usage
Run `chkbit -u PATH` to create/update the chkbit index. Run `chkbit -u PATH` to create/update the chkbit index.
@ -84,18 +95,13 @@ Arguments:
Flags: Flags:
-h, --help Show context-sensitive help. -h, --help Show context-sensitive help.
-H, --tips Show tips. -H, --tips Show tips.
-c, --check check mode: chkbit will verify files in readonly mode (default mode) -u, --update update indices (without this chkbit will verify files in readonly mode)
-u, --update update mode: add and update indices --show-ignored-only only show ignored files
-a, --add-only add mode: only add new files, do not check existing (quicker) --algo="blake3" hash algorithm: md5, sha512, blake3 (default: blake3)
-i, --show-ignored-only show-ignored mode: only show ignored files -f, --force force update of damaged items
-m, --show-missing show missing files/directories -s, --skip-symlinks do not follow symlinks
--force force update of damaged items (advanced usage only)
-S, --skip-symlinks do not follow symlinks
-R, --no-recurse do not recurse into subdirectories
-D, --no-dir-in-index do not track directories in the index
-l, --log-file=STRING write to a logfile if specified -l, --log-file=STRING write to a logfile if specified
--log-verbose verbose logging --log-verbose verbose logging
--algo="blake3" hash algorithm: md5, sha512, blake3 (default: blake3)
--index-name=".chkbit" filename where chkbit stores its hashes, needs to start with '.' (default: .chkbit) --index-name=".chkbit" filename where chkbit stores its hashes, needs to start with '.' (default: .chkbit)
--ignore-name=".chkbitignore" filename that chkbit reads its ignore list from, needs to start with '.' (default: .chkbitignore) --ignore-name=".chkbitignore" filename that chkbit reads its ignore list from, needs to start with '.' (default: .chkbitignore)
-w, --workers=5 number of workers to use (default: 5) -w, --workers=5 number of workers to use (default: 5)
@ -105,27 +111,6 @@ Flags:
-V, --version show version information -V, --version show version information
``` ```
```
$ chkbit -H
.chkbitignore rules:
each line should contain exactly one name
you may use Unix shell-style wildcards (see README)
lines starting with '#' are skipped
lines starting with '/' are only applied to the current directory
Status codes:
DMG: error, data damage detected
EIX: error, index damaged
old: warning, file replaced by an older version
new: new file
upd: file updated
ok : check ok
del: file/directory removed
ign: ignored (see .chkbitignore)
EXC: exception/panic
```
chkbit is set to use only 5 workers by default so it will not slow your system to a crawl. You can specify a higher number to make it a lot faster if the IO throughput can also keep up. chkbit is set to use only 5 workers by default so it will not slow your system to a crawl. You can specify a higher number to make it a lot faster if the IO throughput can also keep up.
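For example, on storage that can sustain the extra parallel reads you might run something like the following; the path is a placeholder and `-u`/`-w` are the flags listed in the help above:

```shell
# update the index with 16 workers instead of the default 5
chkbit -u -w 16 /mnt/data
```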
@ -155,17 +140,6 @@ Add a `.chkbitignore` file containing the names of the files/directories you wis
- hidden files (starting with a `.`) are ignored by default - hidden files (starting with a `.`) are ignored by default
## chkbit as a Go module
chkbit can also be used in other Go programs.
```
go get github.com/laktak/chkbit/v5
```
For more information see the documentation on [pkg.go.dev](https://pkg.go.dev/github.com/laktak/chkbit/v5).
## FAQ ## FAQ
### Should I run `chkbit` on my whole drive? ### Should I run `chkbit` on my whole drive?

View File

@ -1,6 +1,6 @@
package main package main
var headerHelp = `Checks the data integrity of your files. var headerHelp = `Checks the data integrity of your files.
For help tips run "chkbit -H" or go to For help tips run "chkbit -H" or go to
https://github.com/laktak/chkbit https://github.com/laktak/chkbit
` `
@ -19,7 +19,6 @@ Status codes:
new: new file new: new file
upd: file updated upd: file updated
ok : check ok ok : check ok
del: file/directory removed
ign: ignored (see .chkbitignore) ign: ignored (see .chkbitignore)
EXC: exception/panic EXC: exception/panic
` `

View File

@ -10,8 +10,8 @@ import (
"time" "time"
"github.com/alecthomas/kong" "github.com/alecthomas/kong"
"github.com/laktak/chkbit/v5" "github.com/laktak/chkbit"
"github.com/laktak/chkbit/v5/cmd/chkbit/util" "github.com/laktak/chkbit/cmd/chkbit/util"
"github.com/laktak/lterm" "github.com/laktak/lterm"
) )
@ -44,18 +44,13 @@ var (
var cli struct { var cli struct {
Paths []string `arg:"" optional:"" name:"paths" help:"directories to check"` Paths []string `arg:"" optional:"" name:"paths" help:"directories to check"`
Tips bool `short:"H" help:"Show tips."` Tips bool `short:"H" help:"Show tips."`
Check bool `short:"c" help:"check mode: chkbit will verify files in readonly mode (default mode)"` Update bool `short:"u" help:"update indices (without this chkbit will verify files in readonly mode)"`
Update bool `short:"u" help:"update mode: add and update indices"` ShowIgnoredOnly bool `help:"only show ignored files"`
AddOnly bool `short:"a" help:"add mode: only add new files, do not check existing (quicker)"` Algo string `default:"blake3" help:"hash algorithm: md5, sha512, blake3 (default: blake3)"`
ShowIgnoredOnly bool `short:"i" help:"show-ignored mode: only show ignored files"` Force bool `short:"f" help:"force update of damaged items"`
ShowMissing bool `short:"m" help:"show missing files/directories"` SkipSymlinks bool `short:"s" help:"do not follow symlinks"`
Force bool `help:"force update of damaged items (advanced usage only)"`
SkipSymlinks bool `short:"S" help:"do not follow symlinks"`
NoRecurse bool `short:"R" help:"do not recurse into subdirectories"`
NoDirInIndex bool `short:"D" help:"do not track directories in the index"`
LogFile string `short:"l" help:"write to a logfile if specified"` LogFile string `short:"l" help:"write to a logfile if specified"`
LogVerbose bool `help:"verbose logging"` LogVerbose bool `help:"verbose logging"`
Algo string `default:"blake3" help:"hash algorithm: md5, sha512, blake3 (default: blake3)"`
IndexName string `default:".chkbit" help:"filename where chkbit stores its hashes, needs to start with '.' (default: .chkbit)"` IndexName string `default:".chkbit" help:"filename where chkbit stores its hashes, needs to start with '.' (default: .chkbit)"`
IgnoreName string `default:".chkbitignore" help:"filename that chkbit reads its ignore list from, needs to start with '.' (default: .chkbitignore)"` IgnoreName string `default:".chkbitignore" help:"filename that chkbit reads its ignore list from, needs to start with '.' (default: .chkbitignore)"`
Workers int `short:"w" default:"5" help:"number of workers to use (default: 5)"` Workers int `short:"w" default:"5" help:"number of workers to use (default: 5)"`
@ -66,16 +61,19 @@ var cli struct {
} }
type Main struct { type Main struct {
context *chkbit.Context
dmgList []string dmgList []string
errList []string errList []string
numIdxUpd int
numNew int
numUpd int
verbose bool verbose bool
logger *log.Logger logger *log.Logger
logVerbose bool logVerbose bool
progress Progress progress Progress
total int
termWidth int termWidth int
fps *util.RateCalc fps *RateCalc
bps *util.RateCalc bps *RateCalc
} }
func (m *Main) log(text string) { func (m *Main) log(text string) {
@ -84,36 +82,44 @@ func (m *Main) log(text string) {
func (m *Main) logStatus(stat chkbit.Status, message string) bool { func (m *Main) logStatus(stat chkbit.Status, message string) bool {
if stat == chkbit.STATUS_UPDATE_INDEX { if stat == chkbit.STATUS_UPDATE_INDEX {
return false m.numIdxUpd++
} } else {
if stat == chkbit.STATUS_ERR_DMG {
if stat == chkbit.STATUS_ERR_DMG { m.total++
m.dmgList = append(m.dmgList, message) m.dmgList = append(m.dmgList, message)
} else if stat == chkbit.STATUS_PANIC { } else if stat == chkbit.STATUS_PANIC {
m.errList = append(m.errList, message) m.errList = append(m.errList, message)
} } else if stat == chkbit.STATUS_OK || stat == chkbit.STATUS_UPDATE || stat == chkbit.STATUS_NEW || stat == chkbit.STATUS_UP_WARN_OLD {
m.total++
if m.logVerbose || !stat.IsVerbose() { if stat == chkbit.STATUS_UPDATE || stat == chkbit.STATUS_UP_WARN_OLD {
m.log(stat.String() + " " + message) m.numUpd++
} } else if stat == chkbit.STATUS_NEW {
m.numNew++
if m.verbose || !stat.IsVerbose() { }
col := "" }
if stat.IsErrorOrWarning() {
col = termAlertFG if m.logVerbose || stat != chkbit.STATUS_OK && stat != chkbit.STATUS_IGNORE {
m.log(stat.String() + " " + message)
}
if m.verbose || !stat.IsVerbose() {
col := ""
if stat.IsErrorOrWarning() {
col = termAlertFG
}
lterm.Printline(col, stat.String(), " ", message, lterm.Reset)
return true
} }
lterm.Printline(col, stat.String(), " ", message, lterm.Reset)
return true
} }
return false return false
} }
func (m *Main) showStatus() { func (m *Main) showStatus(context *chkbit.Context) {
last := time.Now().Add(-updateInterval) last := time.Now().Add(-updateInterval)
stat := "" stat := ""
for { for {
select { select {
case item := <-m.context.LogQueue: case item := <-context.LogQueue:
if item == nil { if item == nil {
if m.progress == Fancy { if m.progress == Fancy {
lterm.Printline("") lterm.Printline("")
@ -124,10 +130,10 @@ func (m *Main) showStatus() {
if m.progress == Fancy { if m.progress == Fancy {
lterm.Write(termBG, termFG1, stat, lterm.ClearLine(0), lterm.Reset, "\r") lterm.Write(termBG, termFG1, stat, lterm.ClearLine(0), lterm.Reset, "\r")
} else { } else {
fmt.Print(m.context.NumTotal, "\r") fmt.Print(m.total, "\r")
} }
} }
case perf := <-m.context.PerfQueue: case perf := <-context.PerfQueue:
now := time.Now() now := time.Now()
m.fps.Push(now, perf.NumFiles) m.fps.Push(now, perf.NumFiles)
m.bps.Push(now, perf.NumBytes) m.bps.Push(now, perf.NumBytes)
@ -137,11 +143,11 @@ func (m *Main) showStatus() {
statF := fmt.Sprintf("%d files/s", m.fps.Last()) statF := fmt.Sprintf("%d files/s", m.fps.Last())
statB := fmt.Sprintf("%d MB/s", m.bps.Last()/sizeMB) statB := fmt.Sprintf("%d MB/s", m.bps.Last()/sizeMB)
stat = "RW" stat = "RW"
if !m.context.UpdateIndex { if !context.Update {
stat = "RO" stat = "RO"
} }
stat = fmt.Sprintf("[%s:%d] %5d files $ %s %-13s $ %s %-13s", stat = fmt.Sprintf("[%s:%d] %5d files $ %s %-13s $ %s %-13s",
stat, m.context.NumWorkers, m.context.NumTotal, stat, context.NumWorkers, m.total,
util.Sparkline(m.fps.Stats), statF, util.Sparkline(m.fps.Stats), statF,
util.Sparkline(m.bps.Stats), statB) util.Sparkline(m.bps.Stats), statB)
stat = util.LeftTruncate(stat, m.termWidth-1) stat = util.LeftTruncate(stat, m.termWidth-1)
@ -149,49 +155,38 @@ func (m *Main) showStatus() {
stat = strings.Replace(stat, "$", termSepFG+termSep+termFG3, 1) stat = strings.Replace(stat, "$", termSepFG+termSep+termFG3, 1)
lterm.Write(termBG, termFG1, stat, lterm.ClearLine(0), lterm.Reset, "\r") lterm.Write(termBG, termFG1, stat, lterm.ClearLine(0), lterm.Reset, "\r")
} else if m.progress == Plain { } else if m.progress == Plain {
fmt.Print(m.context.NumTotal, "\r") fmt.Print(m.total, "\r")
} }
} }
} }
} }
} }
func (m *Main) process() bool { func (m *Main) process() *chkbit.Context {
// verify mode if cli.Update && cli.ShowIgnoredOnly {
var b01 = map[bool]int8{false: 0, true: 1} fmt.Println("Error: use either --update or --show-ignored-only!")
if b01[cli.Check]+b01[cli.Update]+b01[cli.AddOnly]+b01[cli.ShowIgnoredOnly] > 1 { return nil
fmt.Println("Error: can only run one mode at a time!")
os.Exit(1)
} }
var err error context, err := chkbit.NewContext(cli.Workers, cli.Force, cli.Update, cli.ShowIgnoredOnly, cli.Algo, cli.SkipSymlinks, cli.IndexName, cli.IgnoreName)
m.context, err = chkbit.NewContext(cli.Workers, cli.Algo, cli.IndexName, cli.IgnoreName)
if err != nil { if err != nil {
fmt.Println(err) fmt.Println(err)
return false return nil
} }
m.context.ForceUpdateDmg = cli.Force
m.context.UpdateIndex = cli.Update || cli.AddOnly
m.context.AddOnly = cli.AddOnly
m.context.ShowIgnoredOnly = cli.ShowIgnoredOnly
m.context.ShowMissing = cli.ShowMissing
m.context.SkipSymlinks = cli.SkipSymlinks
m.context.SkipSubdirectories = cli.NoRecurse
m.context.TrackDirectories = !cli.NoDirInIndex
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
m.showStatus() m.showStatus(context)
}() }()
m.context.Start(cli.Paths) context.Start(cli.Paths)
wg.Wait() wg.Wait()
return true return context
} }
func (m *Main) printResult() { func (m *Main) printResult(context *chkbit.Context) {
cprint := func(col, text string) { cprint := func(col, text string) {
if m.progress != Quiet { if m.progress != Quiet {
if m.progress == Fancy { if m.progress == Fancy {
@ -214,14 +209,14 @@ func (m *Main) printResult() {
if m.progress != Quiet { if m.progress != Quiet {
mode := "" mode := ""
if !m.context.UpdateIndex { if !context.Update {
mode = " in readonly mode" mode = " in readonly mode"
} }
status := fmt.Sprintf("Processed %s%s.", util.LangNum1MutateSuffix(m.context.NumTotal, "file"), mode) status := fmt.Sprintf("Processed %s%s.", util.LangNum1MutateSuffix(m.total, "file"), mode)
cprint(termOKFG, status) cprint(termOKFG, status)
m.log(status) m.log(status)
if m.progress == Fancy && m.context.NumTotal > 0 { if m.progress == Fancy && m.total > 0 {
elapsed := time.Since(m.fps.Start) elapsed := time.Since(m.fps.Start)
elapsedS := elapsed.Seconds() elapsedS := elapsed.Seconds()
fmt.Println("-", elapsed.Truncate(time.Second), "elapsed") fmt.Println("-", elapsed.Truncate(time.Second), "elapsed")
@ -229,26 +224,17 @@ func (m *Main) printResult() {
fmt.Printf("- %.2f MB/second\n", (float64(m.bps.Total)+float64(m.bps.Current))/float64(sizeMB)/elapsedS) fmt.Printf("- %.2f MB/second\n", (float64(m.bps.Total)+float64(m.bps.Current))/float64(sizeMB)/elapsedS)
} }
del := "" if context.Update {
if m.context.UpdateIndex { if m.numIdxUpd > 0 {
if m.context.NumIdxUpd > 0 { cprint(termOKFG, fmt.Sprintf("- %s updated\n- %s added\n- %s updated",
if m.context.NumDel > 0 { util.LangNum1Choice(m.numIdxUpd, "directory was", "directories were"),
del = fmt.Sprintf("\n- %s been removed", util.LangNum1Choice(m.context.NumDel, "file/directory has", "files/directories have")) util.LangNum1Choice(m.numNew, "file hash was", "file hashes were"),
} util.LangNum1Choice(m.numUpd, "file hash was", "file hashes were")))
cprint(termOKFG, fmt.Sprintf("- %s updated\n- %s added\n- %s updated%s",
util.LangNum1Choice(m.context.NumIdxUpd, "directory was", "directories were"),
util.LangNum1Choice(m.context.NumNew, "file hash was", "file hashes were"),
util.LangNum1Choice(m.context.NumUpd, "file hash was", "file hashes were"),
del))
} }
} else if m.context.NumNew+m.context.NumUpd+m.context.NumDel > 0 { } else if m.numNew+m.numUpd > 0 {
if m.context.NumDel > 0 { cprint(termAlertFG, fmt.Sprintf("No changes were made (specify -u to update):\n- %s would have been added and\n- %s would have been updated.",
del = fmt.Sprintf("\n- %s would have been removed", util.LangNum1Choice(m.context.NumDel, "file/directory", "files/directories")) util.LangNum1MutateSuffix(m.numNew, "file"),
} util.LangNum1MutateSuffix(m.numUpd, "file")))
cprint(termAlertFG, fmt.Sprintf("No changes were made (specify -u to update):\n- %s would have been added\n- %s would have been updated%s",
util.LangNum1MutateSuffix(m.context.NumNew, "file"),
util.LangNum1MutateSuffix(m.context.NumUpd, "file"),
del))
} }
} }
@ -324,8 +310,9 @@ func (m *Main) run() {
if len(cli.Paths) > 0 { if len(cli.Paths) > 0 {
m.log("chkbit " + strings.Join(cli.Paths, ", ")) m.log("chkbit " + strings.Join(cli.Paths, ", "))
if m.process() && !m.context.ShowIgnoredOnly { context := m.process()
m.printResult() if context != nil && !context.ShowIgnoredOnly {
m.printResult(context)
} }
} else { } else {
fmt.Println("specify a path to check, see -h") fmt.Println("specify a path to check, see -h")
@ -344,8 +331,8 @@ func main() {
m := &Main{ m := &Main{
logger: log.New(io.Discard, "", 0), logger: log.New(io.Discard, "", 0),
termWidth: termWidth, termWidth: termWidth,
fps: util.NewRateCalc(time.Second, (termWidth-70)/2), fps: NewRateCalc(time.Second, (termWidth-70)/2),
bps: util.NewRateCalc(time.Second, (termWidth-70)/2), bps: NewRateCalc(time.Second, (termWidth-70)/2),
} }
m.run() m.run()
} }

View File

@ -1,4 +1,4 @@
package util package main
import ( import (
"time" "time"

View File

@ -8,32 +8,21 @@ import (
) )
type Context struct { type Context struct {
NumWorkers int NumWorkers int
UpdateIndex bool Force bool
AddOnly bool Update bool
ShowIgnoredOnly bool ShowIgnoredOnly bool
ShowMissing bool HashAlgo string
ForceUpdateDmg bool SkipSymlinks bool
HashAlgo string IndexFilename string
TrackDirectories bool IgnoreFilename string
SkipSymlinks bool WorkQueue chan *WorkItem
SkipSubdirectories bool LogQueue chan *LogEvent
IndexFilename string PerfQueue chan *PerfEvent
IgnoreFilename string wg sync.WaitGroup
WorkQueue chan *WorkItem
LogQueue chan *LogEvent
PerfQueue chan *PerfEvent
wg sync.WaitGroup
mutex sync.Mutex
NumTotal int
NumIdxUpd int
NumNew int
NumUpd int
NumDel int
} }
func NewContext(numWorkers int, hashAlgo string, indexFilename string, ignoreFilename string) (*Context, error) { func NewContext(numWorkers int, force bool, update bool, showIgnoredOnly bool, hashAlgo string, skipSymlinks bool, indexFilename string, ignoreFilename string) (*Context, error) {
if indexFilename[0] != '.' { if indexFilename[0] != '.' {
return nil, errors.New("The index filename must start with a dot!") return nil, errors.New("The index filename must start with a dot!")
} }
@ -44,44 +33,21 @@ func NewContext(numWorkers int, hashAlgo string, indexFilename string, ignoreFil
return nil, errors.New(hashAlgo + " is unknown.") return nil, errors.New(hashAlgo + " is unknown.")
} }
return &Context{ return &Context{
NumWorkers: numWorkers, NumWorkers: numWorkers,
HashAlgo: hashAlgo, Force: force,
IndexFilename: indexFilename, Update: update,
IgnoreFilename: ignoreFilename, ShowIgnoredOnly: showIgnoredOnly,
WorkQueue: make(chan *WorkItem, numWorkers*10), HashAlgo: hashAlgo,
LogQueue: make(chan *LogEvent, numWorkers*100), SkipSymlinks: skipSymlinks,
PerfQueue: make(chan *PerfEvent, numWorkers*10), IndexFilename: indexFilename,
IgnoreFilename: ignoreFilename,
WorkQueue: make(chan *WorkItem, numWorkers*10),
LogQueue: make(chan *LogEvent, numWorkers*100),
PerfQueue: make(chan *PerfEvent, numWorkers*10),
}, nil }, nil
} }
func (context *Context) log(stat Status, message string) { func (context *Context) log(stat Status, message string) {
context.mutex.Lock()
defer context.mutex.Unlock()
switch stat {
case STATUS_ERR_DMG:
context.NumTotal++
case STATUS_UPDATE_INDEX:
context.NumIdxUpd++
case STATUS_UP_WARN_OLD:
context.NumTotal++
context.NumUpd++
case STATUS_UPDATE:
context.NumTotal++
context.NumUpd++
case STATUS_NEW:
context.NumTotal++
context.NumNew++
case STATUS_OK:
if !context.AddOnly {
context.NumTotal++
}
case STATUS_MISSING:
context.NumDel++
//case STATUS_PANIC:
//case STATUS_ERR_IDX:
//case STATUS_IGNORE:
}
context.LogQueue <- &LogEvent{stat, message} context.LogQueue <- &LogEvent{stat, message}
} }
@ -97,8 +63,8 @@ func (context *Context) perfMonBytes(numBytes int64) {
context.PerfQueue <- &PerfEvent{0, numBytes} context.PerfQueue <- &PerfEvent{0, numBytes}
} }
func (context *Context) addWork(path string, filesToIndex []string, dirList []string, ignore *Ignore) { func (context *Context) addWork(path string, filesToIndex []string, ignore *Ignore) {
context.WorkQueue <- &WorkItem{path, filesToIndex, dirList, ignore} context.WorkQueue <- &WorkItem{path, filesToIndex, ignore}
} }
func (context *Context) endWork() { func (context *Context) endWork() {
@ -110,18 +76,12 @@ func (context *Context) isChkbitFile(name string) bool {
} }
func (context *Context) Start(pathList []string) { func (context *Context) Start(pathList []string) {
context.NumTotal = 0
context.NumIdxUpd = 0
context.NumNew = 0
context.NumUpd = 0
context.NumDel = 0
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(context.NumWorkers) wg.Add(context.NumWorkers)
for i := 0; i < context.NumWorkers; i++ { for i := 0; i < context.NumWorkers; i++ {
go func(id int) { go func(id int) {
defer wg.Done() defer wg.Done()
context.runWorker(id) context.RunWorker(id)
}(i) }(i)
} }
go func() { go func() {
@ -161,11 +121,6 @@ func (context *Context) scanDir(root string, parentIgnore *Ignore) {
var dirList []string var dirList []string
var filesToIndex []string var filesToIndex []string
ignore, err := GetIgnore(context, root, parentIgnore)
if err != nil {
context.logErr(root+"/", err)
}
for _, file := range files { for _, file := range files {
path := filepath.Join(root, file.Name()) path := filepath.Join(root, file.Name())
if file.Name()[0] == '.' { if file.Name()[0] == '.' {
@ -175,21 +130,24 @@ func (context *Context) scanDir(root string, parentIgnore *Ignore) {
continue continue
} }
if isDir(file, path) { if isDir(file, path) {
if !ignore.shouldIgnore(file.Name()) { dirList = append(dirList, file.Name())
dirList = append(dirList, file.Name())
} else {
context.log(STATUS_IGNORE, file.Name()+"/")
}
} else if file.Type().IsRegular() { } else if file.Type().IsRegular() {
filesToIndex = append(filesToIndex, file.Name()) filesToIndex = append(filesToIndex, file.Name())
} }
} }
context.addWork(root, filesToIndex, dirList, ignore) ignore, err := GetIgnore(context, root, parentIgnore)
if err != nil {
context.logErr(root+"/", err)
}
if !context.SkipSubdirectories { context.addWork(root, filesToIndex, ignore)
for _, name := range dirList {
for _, name := range dirList {
if !ignore.shouldIgnore(name) {
context.scanDir(filepath.Join(root, name), ignore) context.scanDir(filepath.Join(root, name), ignore)
} else {
context.log(STATUS_IGNORE, name+"/")
} }
} }
} }

go.mod
View File

@ -1,4 +1,4 @@
module github.com/laktak/chkbit/v5 module github.com/laktak/chkbit
go 1.22.3 go 1.22.3

View File

@ -50,7 +50,7 @@ func Hashfile(path string, hashAlgo string, perfMonBytes func(int64)) (string, e
return hex.EncodeToString(h.Sum(nil)), nil return hex.EncodeToString(h.Sum(nil)), nil
} }
func hashMd5(data []byte) string { func HashMd5(data []byte) string {
h := md5.New() h := md5.New()
h.Write(data) h.Write(data)
return hex.EncodeToString(h.Sum(nil)) return hex.EncodeToString(h.Sum(nil))

index.go
View File

@ -5,7 +5,6 @@ import (
"errors" "errors"
"os" "os"
"path/filepath" "path/filepath"
"slices"
) )
const VERSION = 2 // index version const VERSION = 2 // index version
@ -13,54 +12,47 @@ var (
algoMd5 = "md5" algoMd5 = "md5"
) )
type idxInfo struct { type IdxInfo struct {
ModTime int64 `json:"mod"` ModTime int64 `json:"mod"`
Algo *string `json:"a,omitempty"` Algo *string `json:"a,omitempty"`
Hash *string `json:"h,omitempty"` Hash *string `json:"h,omitempty"`
LegacyHash *string `json:"md5,omitempty"` LegacyHash *string `json:"md5,omitempty"`
} }
type indexFile struct { type IndexFile struct {
V int `json:"v"` V int `json:"v"`
// IdxRaw -> map[string]idxInfo
IdxRaw json.RawMessage `json:"idx"` IdxRaw json.RawMessage `json:"idx"`
IdxHash string `json:"idx_hash"` IdxHash string `json:"idx_hash"`
// 2024-08 optional, list of subdirectories
Dir []string `json:"dirlist,omitempty"`
} }
type idxInfo1 struct { type IdxInfo1 struct {
ModTime int64 `json:"mod"` ModTime int64 `json:"mod"`
Hash string `json:"md5"` Hash string `json:"md5"`
} }
type indexFile1 struct { type IndexFile1 struct {
Data map[string]idxInfo1 `json:"data"` Data map[string]IdxInfo1 `json:"data"`
} }
type Index struct { type Index struct {
context *Context context *Context
path string path string
files []string files []string
cur map[string]idxInfo cur map[string]IdxInfo
new map[string]idxInfo new map[string]IdxInfo
curDirList []string updates []string
newDirList []string modified bool
modified bool readonly bool
readonly bool
} }
func newIndex(context *Context, path string, files []string, dirList []string, readonly bool) *Index { func NewIndex(context *Context, path string, files []string, readonly bool) *Index {
slices.Sort(dirList)
return &Index{ return &Index{
context: context, context: context,
path: path, path: path,
files: files, files: files,
cur: make(map[string]idxInfo), cur: make(map[string]IdxInfo),
new: make(map[string]idxInfo), new: make(map[string]IdxInfo),
curDirList: make([]string, 0), readonly: readonly,
newDirList: dirList,
readonly: readonly,
} }
} }
@ -68,6 +60,10 @@ func (i *Index) getIndexFilepath() string {
return filepath.Join(i.path, i.context.IndexFilename) return filepath.Join(i.path, i.context.IndexFilename)
} }
func (i *Index) setMod(value bool) {
i.modified = value
}
func (i *Index) logFilePanic(name string, message string) { func (i *Index) logFilePanic(name string, message string) {
i.context.log(STATUS_PANIC, filepath.Join(i.path, name)+": "+message) i.context.log(STATUS_PANIC, filepath.Join(i.path, name)+": "+message)
} }
@ -76,10 +72,6 @@ func (i *Index) logFile(stat Status, name string) {
i.context.log(stat, filepath.Join(i.path, name)) i.context.log(stat, filepath.Join(i.path, name))
} }
func (i *Index) logDir(stat Status, name string) {
i.context.log(stat, filepath.Join(i.path, name)+"/")
}
func (i *Index) calcHashes(ignore *Ignore) { func (i *Index) calcHashes(ignore *Ignore) {
for _, name := range i.files { for _, name := range i.files {
if ignore != nil && ignore.shouldIgnore(name) { if ignore != nil && ignore.shouldIgnore(name) {
@ -88,13 +80,13 @@ func (i *Index) calcHashes(ignore *Ignore) {
} }
var err error var err error
var info *idxInfo var info *IdxInfo
algo := i.context.HashAlgo algo := i.context.HashAlgo
if val, ok := i.cur[name]; ok { if val, ok := i.cur[name]; ok {
// existing file // existing
if val.LegacyHash != nil { if val.LegacyHash != nil {
// convert from py1 to new format // convert from py1 to new format
val = idxInfo{ val = IdxInfo{
ModTime: val.ModTime, ModTime: val.ModTime,
Algo: &algoMd5, Algo: &algoMd5,
Hash: val.LegacyHash, Hash: val.LegacyHash,
@ -104,15 +96,10 @@ func (i *Index) calcHashes(ignore *Ignore) {
if val.Algo != nil { if val.Algo != nil {
algo = *val.Algo algo = *val.Algo
} }
if i.context.AddOnly { info, err = i.calcFile(name, algo)
info = &val
} else {
info, err = i.calcFile(name, algo)
}
} else { } else {
// new file
if i.readonly { if i.readonly {
info = &idxInfo{Algo: &algo} info = &IdxInfo{Algo: &algo}
} else { } else {
info, err = i.calcFile(name, algo) info, err = i.calcFile(name, algo)
} }
@ -133,70 +120,41 @@ func (i *Index) showIgnoredOnly(ignore *Ignore) {
} }
} }
func (i *Index) checkFix(forceUpdateDmg bool) { func (i *Index) checkFix(force bool) {
for name, b := range i.new { for name, b := range i.new {
if a, ok := i.cur[name]; !ok { if a, ok := i.cur[name]; !ok {
i.logFile(STATUS_NEW, name) i.logFile(STATUS_NEW, name)
i.modified = true i.setMod(true)
} else { } else {
amod := int64(a.ModTime) amod := int64(a.ModTime)
bmod := int64(b.ModTime) bmod := int64(b.ModTime)
if a.Hash != nil && b.Hash != nil && *a.Hash == *b.Hash { if a.Hash != nil && b.Hash != nil && *a.Hash == *b.Hash {
i.logFile(STATUS_OK, name) i.logFile(STATUS_OK, name)
if amod != bmod { if amod != bmod {
i.modified = true i.setMod(true)
} }
continue continue
} }
if amod == bmod { if amod == bmod {
i.logFile(STATUS_ERR_DMG, name) i.logFile(STATUS_ERR_DMG, name)
if !forceUpdateDmg { if !force {
// keep DMG entry
i.new[name] = a i.new[name] = a
} else { } else {
i.modified = true i.setMod(true)
} }
} else if amod < bmod { } else if amod < bmod {
i.logFile(STATUS_UPDATE, name) i.logFile(STATUS_UPDATE, name)
i.modified = true i.setMod(true)
} else if amod > bmod { } else if amod > bmod {
i.logFile(STATUS_UP_WARN_OLD, name) i.logFile(STATUS_UP_WARN_OLD, name)
i.modified = true i.setMod(true)
} }
} }
} }
// track missing
for name := range i.cur {
if _, ok := i.new[name]; !ok {
i.modified = true
if i.context.ShowMissing {
i.logFile(STATUS_MISSING, name)
}
}
}
// dirs
m := make(map[string]bool)
for _, n := range i.newDirList {
m[n] = true
}
for _, name := range i.curDirList {
if !m[name] {
i.modified = true
if i.context.ShowMissing {
i.logDir(STATUS_MISSING, name+"/")
}
}
}
if len(i.newDirList) != len(i.curDirList) {
// added
i.modified = true
}
} }
func (i *Index) calcFile(name string, a string) (*idxInfo, error) { func (i *Index) calcFile(name string, a string) (*IdxInfo, error) {
path := filepath.Join(i.path, name) path := filepath.Join(i.path, name)
info, _ := os.Stat(path) info, _ := os.Stat(path)
mtime := int64(info.ModTime().UnixNano() / 1e6) mtime := int64(info.ModTime().UnixNano() / 1e6)
@ -205,7 +163,7 @@ func (i *Index) calcFile(name string, a string) (*idxInfo, error) {
return nil, err return nil, err
} }
i.context.perfMonFiles(1) i.context.perfMonFiles(1)
return &idxInfo{ return &IdxInfo{
ModTime: mtime, ModTime: mtime,
Algo: &a, Algo: &a,
Hash: &h, Hash: &h,
@ -222,13 +180,10 @@ func (i *Index) save() (bool, error) {
if err != nil { if err != nil {
return false, err return false, err
} }
data := indexFile{ data := IndexFile{
V: VERSION, V: VERSION,
IdxRaw: text, IdxRaw: text,
IdxHash: hashMd5(text), IdxHash: HashMd5(text),
}
if i.context.TrackDirectories {
data.Dir = i.newDirList
} }
file, err := json.Marshal(data) file, err := json.Marshal(data)
@ -239,7 +194,7 @@ func (i *Index) save() (bool, error) {
if err != nil { if err != nil {
return false, err return false, err
} }
i.modified = false i.setMod(false)
return true, nil return true, nil
} else { } else {
return false, nil return false, nil
@ -253,12 +208,12 @@ func (i *Index) load() error {
} }
return err return err
} }
i.modified = false i.setMod(false)
file, err := os.ReadFile(i.getIndexFilepath()) file, err := os.ReadFile(i.getIndexFilepath())
if err != nil { if err != nil {
return err return err
} }
var data indexFile var data IndexFile
err = json.Unmarshal(file, &data) err = json.Unmarshal(file, &data)
if err != nil { if err != nil {
return err return err
@ -269,22 +224,22 @@ func (i *Index) load() error {
return err return err
} }
text := data.IdxRaw text := data.IdxRaw
if data.IdxHash != hashMd5(text) { if data.IdxHash != HashMd5(text) {
// old versions may have saved the JSON encoded with extra spaces // old versions may have saved the JSON encoded with extra spaces
text, _ = json.Marshal(data.IdxRaw) text, _ = json.Marshal(data.IdxRaw)
} else { } else {
} }
if data.IdxHash != hashMd5(text) { if data.IdxHash != HashMd5(text) {
i.modified = true i.setMod(true)
i.logFile(STATUS_ERR_IDX, i.getIndexFilepath()) i.logFile(STATUS_ERR_IDX, i.getIndexFilepath())
} }
} else { } else {
var data1 indexFile1 var data1 IndexFile1
json.Unmarshal(file, &data1) json.Unmarshal(file, &data1)
if data1.Data != nil { if data1.Data != nil {
// convert from js to new format // convert from js to new format
for name, item := range data1.Data { for name, item := range data1.Data {
i.cur[name] = idxInfo{ i.cur[name] = IdxInfo{
ModTime: item.ModTime, ModTime: item.ModTime,
Algo: &algoMd5, Algo: &algoMd5,
Hash: &item.Hash, Hash: &item.Hash,
@ -292,12 +247,5 @@ func (i *Index) load() error {
} }
} }
} }
// dirs
if data.Dir != nil {
slices.Sort(data.Dir)
i.curDirList = data.Dir
}
return nil return nil
} }

View File

@ -11,8 +11,6 @@ import (
"time" "time"
) )
// perform integration test using the compiled binary
var testDir = "/tmp/chkbit" var testDir = "/tmp/chkbit"
func getCmd() string { func getCmd() string {
@ -27,12 +25,6 @@ func checkOut(t *testing.T, sout string, expected string) {
} }
} }
func checkNotOut(t *testing.T, sout string, notExpected string) {
if strings.Contains(sout, notExpected) {
t.Errorf("Did not expect '%s' in output, got '%s'\n", notExpected, sout)
}
}
// misc files // misc files
var ( var (
@ -67,12 +59,7 @@ func setDate(filename string, r int) {
os.Chtimes(filename, date, date) os.Chtimes(filename, date, date)
} }
func genFile(path string, size int) { func genFile(dir string, a int) {
os.WriteFile(path, make([]byte, size), 0644)
setDate(path, size*size)
}
func genFiles(dir string, a int) {
os.MkdirAll(dir, 0755) os.MkdirAll(dir, 0755)
for i := 1; i <= 5; i++ { for i := 1; i <= 5; i++ {
size := a*i*wordIdx*100 + extIdx size := a*i*wordIdx*100 + extIdx
@ -83,7 +70,9 @@ func genFiles(dir string, a int) {
} }
file += "." + nextExt() file += "." + nextExt()
genFile(filepath.Join(dir, file), size) path := filepath.Join(dir, file)
os.WriteFile(path, make([]byte, size), 0644)
setDate(path, size*size)
} }
} }
@ -92,11 +81,11 @@ func genDir(root string) {
for i := 1; i <= 5; i++ { for i := 1; i <= 5; i++ {
dir := filepath.Join(root, start, nextWord()) dir := filepath.Join(root, start, nextWord())
genFiles(dir, 1) genFile(dir, 1)
if wordIdx%3 == 0 { if wordIdx%3 == 0 {
dir = filepath.Join(dir, nextWord()) dir = filepath.Join(dir, nextWord())
genFiles(dir, 1) genFile(dir, 1)
} }
} }
} }
@ -118,8 +107,6 @@ func setupMiscFiles() {
genDir(root) genDir(root)
os.MkdirAll(filepath.Join(root, "day/car/empty"), 0755)
rootPeople := filepath.Join(root, "people") rootPeople := filepath.Join(root, "people")
testPeople := filepath.Join(testDir, "people") testPeople := filepath.Join(testDir, "people")
@ -141,125 +128,14 @@ func TestRoot(t *testing.T) {
tool := getCmd() tool := getCmd()
root := filepath.Join(testDir, "root") root := filepath.Join(testDir, "root")
cmd := exec.Command(tool, "-u", root)
// update index, no recourse out, err := cmd.Output()
t.Run("no-recourse", func(t *testing.T) { if err != nil {
cmd := exec.Command(tool, "-umR", filepath.Join(root, "day/office")) t.Fatalf("cmd.Output() failed with '%s'\n", err)
out, err := cmd.Output() }
if err != nil { sout := string(out)
t.Fatalf("failed with '%s'\n", err) checkOut(t, sout, "60 directories were updated")
} checkOut(t, sout, "300 file hashes were added")
sout := string(out)
checkOut(t, sout, "Processed 5 files")
checkOut(t, sout, "- 1 directory was updated")
checkOut(t, sout, "- 5 file hashes were added")
checkOut(t, sout, "- 0 file hashes were updated")
checkNotOut(t, sout, "removed")
})
// update remaining index from root
t.Run("update-remaining", func(t *testing.T) {
cmd := exec.Command(tool, "-um", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "Processed 300 files")
checkOut(t, sout, "- 66 directories were updated")
checkOut(t, sout, "- 295 file hashes were added")
checkOut(t, sout, "- 0 file hashes were updated")
checkNotOut(t, sout, "removed")
})
// delete files, check for missing
t.Run("delete", func(t *testing.T) {
os.RemoveAll(filepath.Join(root, "thing/change"))
os.Remove(filepath.Join(root, "time/hour/minute/body-information.csv"))
cmd := exec.Command(tool, "-m", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "del /tmp/chkbit/root/thing/change/")
checkOut(t, sout, "2 files/directories would have been removed")
})
// do not report missing without -m
t.Run("no-missing", func(t *testing.T) {
cmd := exec.Command(tool, root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkNotOut(t, sout, "del ")
checkNotOut(t, sout, "removed")
})
// check for missing and update
t.Run("missing", func(t *testing.T) {
cmd := exec.Command(tool, "-um", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "del /tmp/chkbit/root/thing/change/")
checkOut(t, sout, "2 files/directories have been removed")
})
// check again
t.Run("repeat", func(t *testing.T) {
for i := 0; i < 10; i++ {
cmd := exec.Command(tool, "-uv", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "Processed 289 files")
checkNotOut(t, sout, "removed")
checkNotOut(t, sout, "updated")
checkNotOut(t, sout, "added")
}
})
// add files only
t.Run("add-only", func(t *testing.T) {
genFiles(filepath.Join(root, "way/add"), 99)
genFile(filepath.Join(root, "time/add-file.txt"), 500)
// modify existing, will not be reported:
genFile(filepath.Join(root, "way/job/word-business.mp3"), 500)
cmd := exec.Command(tool, "-a", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "Processed 6 files")
checkOut(t, sout, "- 3 directories were updated")
checkOut(t, sout, "- 6 file hashes were added")
checkOut(t, sout, "- 0 file hashes were updated")
})
// update remaining
t.Run("update-remaining-add", func(t *testing.T) {
cmd := exec.Command(tool, "-u", root)
out, err := cmd.Output()
if err != nil {
t.Fatalf("failed with '%s'\n", err)
}
sout := string(out)
checkOut(t, sout, "Processed 295 files")
checkOut(t, sout, "- 1 directory was updated")
checkOut(t, sout, "- 0 file hashes were added")
checkOut(t, sout, "- 1 file hash was updated")
})
} }
func TestDMG(t *testing.T) { func TestDMG(t *testing.T) {
@ -285,58 +161,50 @@ func TestDMG(t *testing.T) {
t2, _ := time.Parse(time.RFC3339, "2022-02-01T12:00:00Z") t2, _ := time.Parse(time.RFC3339, "2022-02-01T12:00:00Z")
t3, _ := time.Parse(time.RFC3339, "2022-02-01T13:00:00Z") t3, _ := time.Parse(time.RFC3339, "2022-02-01T13:00:00Z")
// create test and set the modified time" // step1: create test and set the modified time"
t.Run("create", func(t *testing.T) { os.WriteFile(testFile, []byte("foo1"), 0644)
os.WriteFile(testFile, []byte("foo1"), 0644) os.Chtimes(testFile, t2, t2)
os.Chtimes(testFile, t2, t2)
cmd := exec.Command(tool, "-u", ".") cmd := exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil { if out, err := cmd.Output(); err != nil {
t.Fatalf("failed with '%s'\n", err) t.Fatalf("step1 failed with '%s'\n", err)
} else { } else {
checkOut(t, string(out), "new test.txt") checkOut(t, string(out), "new test.txt")
}
// step2: update test with different content & old modified (expect 'old')"
os.WriteFile(testFile, []byte("foo2"), 0644)
os.Chtimes(testFile, t1, t1)
cmd = exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
t.Fatalf("step2 failed with '%s'\n", err)
} else {
checkOut(t, string(out), "old test.txt")
}
// step3: update test & new modified (expect 'upd')"
os.WriteFile(testFile, []byte("foo3"), 0644)
os.Chtimes(testFile, t3, t3)
cmd = exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
t.Fatalf("step3 failed with '%s'\n", err)
} else {
checkOut(t, string(out), "upd test.txt")
}
// step4: Now update test with the same modified to simulate damage (expect DMG)"
os.WriteFile(testFile, []byte("foo4"), 0644)
os.Chtimes(testFile, t3, t3)
cmd = exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
if cmd.ProcessState.ExitCode() != 1 {
t.Fatalf("step4 expected to fail with exit code 1 vs %d!", cmd.ProcessState.ExitCode())
} }
}) checkOut(t, string(out), "DMG test.txt")
} else {
// update test with different content & old modified (expect 'old')" t.Fatal("step4 expected to fail!")
t.Run("expect-old", func(t *testing.T) { }
os.WriteFile(testFile, []byte("foo2"), 0644)
os.Chtimes(testFile, t1, t1)
cmd := exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
t.Fatalf("failed with '%s'\n", err)
} else {
checkOut(t, string(out), "old test.txt")
}
})
// update test & new modified (expect 'upd')"
t.Run("expect-upd", func(t *testing.T) {
os.WriteFile(testFile, []byte("foo3"), 0644)
os.Chtimes(testFile, t3, t3)
cmd := exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
t.Fatalf("failed with '%s'\n", err)
} else {
checkOut(t, string(out), "upd test.txt")
}
})
// Now update test with the same modified to simulate damage (expect DMG)"
t.Run("expect-DMG", func(t *testing.T) {
os.WriteFile(testFile, []byte("foo4"), 0644)
os.Chtimes(testFile, t3, t3)
cmd := exec.Command(tool, "-u", ".")
if out, err := cmd.Output(); err != nil {
if cmd.ProcessState.ExitCode() != 1 {
t.Fatalf("expected to fail with exit code 1 vs %d!", cmd.ProcessState.ExitCode())
}
checkOut(t, string(out), "DMG test.txt")
} else {
t.Fatal("expected to fail!")
}
})
} }

View File

@ -7,5 +7,5 @@ cd $script_dir/..
# prep # prep
$script_dir/build $script_dir/build
go test -v ./cmd/chkbit/util -count=1 go test -v ./cmd/chkbit/util
go test -v ./scripts -count=1 go test -v ./scripts

View File

@ -12,7 +12,6 @@ const (
STATUS_NEW Status = "new" STATUS_NEW Status = "new"
STATUS_OK Status = "ok " STATUS_OK Status = "ok "
STATUS_IGNORE Status = "ign" STATUS_IGNORE Status = "ign"
STATUS_MISSING Status = "del"
) )
func (s Status) String() string { func (s Status) String() string {

View File

@ -3,18 +3,17 @@ package chkbit
type WorkItem struct { type WorkItem struct {
path string path string
filesToIndex []string filesToIndex []string
dirList []string
ignore *Ignore ignore *Ignore
} }
func (context *Context) runWorker(id int) { func (context *Context) RunWorker(id int) {
for { for {
item := <-context.WorkQueue item := <-context.WorkQueue
if item == nil { if item == nil {
break break
} }
index := newIndex(context, item.path, item.filesToIndex, item.dirList, !context.UpdateIndex) index := NewIndex(context, item.path, item.filesToIndex, !context.Update)
err := index.load() err := index.load()
if err != nil { if err != nil {
context.log(STATUS_PANIC, index.getIndexFilepath()+": "+err.Error()) context.log(STATUS_PANIC, index.getIndexFilepath()+": "+err.Error())
@ -24,9 +23,9 @@ func (context *Context) runWorker(id int) {
index.showIgnoredOnly(item.ignore) index.showIgnoredOnly(item.ignore)
} else { } else {
index.calcHashes(item.ignore) index.calcHashes(item.ignore)
index.checkFix(context.ForceUpdateDmg) index.checkFix(context.Force)
if context.UpdateIndex { if context.Update {
if changed, err := index.save(); err != nil { if changed, err := index.save(); err != nil {
context.logErr(item.path, err) context.logErr(item.path, err)
} else if changed { } else if changed {