Mirror of https://github.com/tw93/Mole.git (synced 2026-02-04 09:46:44 +00:00)
feat: Enhance clean, optimize, analyze, and status commands, and update security audit documentation.
1 .github/copilot-instructions.md (vendored)
@@ -1 +0,0 @@
../AGENT.md
154 AGENT.md
@@ -1,130 +1,46 @@
# Mole AI Agent Documentation
# Mole AI Agent Notes

> **READ THIS FIRST**: This file serves as the single source of truth for any AI agent trying to work on the Mole repository. It aggregates architectural context, development workflows, and behavioral guidelines.
Use this file as the single source of truth for how to work on Mole.

## 1. Philosophy & Guidelines
## Principles

### Core Philosophy
- Safety first: never risk user data or system stability.
- Never run destructive operations that could break the user's machine.
- Do not delete user-important files; cleanup must be conservative and reversible.
- Always use `safe_*` helpers (no raw `rm -rf`).
- Keep changes small and confirm uncertain behavior.
- Follow the local code style in the file you are editing (Bash 3.2 compatible).
- Comments must be English, concise, and intent-focused.
- Use comments for safety boundaries, non-obvious logic, or flow context.
- Entry scripts start with ~3 short lines describing purpose/behavior.
- Do not remove installer flags `--prefix`/`--config` (update flow depends on them).
- Do not commit or submit code changes unless explicitly requested.

- **Safety First**: Never risk user data. Always use `safe_*` wrappers. When in doubt, ask.
- **Incremental Progress**: Break complex tasks into manageable stages.
- **Clear Intent**: Prioritize readability and maintainability over clever hacks.
- **Native Performance**: Use Go for heavy lifting (scanning), Bash for system glue.
## Architecture

### Eight Honors and Eight Shames
- `mole`: main CLI entrypoint (menu + command routing).
- `mo`: CLI alias wrapper.
- `install.sh`: manual installer/updater (download/build + install).
- `bin/`: command entry points (`clean.sh`, `uninstall.sh`, `optimize.sh`, `purge.sh`, `touchid.sh`, `analyze.sh`, `status.sh`).
- `lib/`: shell logic (`core/`, `clean/`, `ui/`).
- `cmd/`: Go apps (`analyze/`, `status/`).
- `scripts/`: build/test helpers.
- `tests/`: BATS integration tests.

- **Shame** in guessing APIs, **Honor** in careful research.
- **Shame** in vague execution, **Honor** in seeking confirmation.
- **Shame** in assuming business logic, **Honor** in human verification.
- **Shame** in creating interfaces, **Honor** in reusing existing ones.
- **Shame** in skipping validation, **Honor** in proactive testing.
- **Shame** in breaking architecture, **Honor** in following specifications.
- **Shame** in pretending to understand, **Honor** in honest ignorance.
- **Shame** in blind modification, **Honor** in careful refactoring.
## Workflow

### Quality Standards
- Shell work: add logic under `lib/`, call from `bin/`.
- Go work: edit `cmd/<app>/*.go`.
- Prefer dry-run modes while validating cleanup behavior.

- **English Only**: Comments and code must be in English.
- **No Unnecessary Comments**: Code should be self-explanatory.
- **Pure Shell Style**: Use `[[ ]]` over `[ ]`, avoid `local var` assignments on definition line if exit code matters.
- **Go Formatting**: Always run `gofmt` (or let the build script do it).
## Build & Test

## 2. Project Identity
- `./scripts/run-tests.sh` runs lint/shell/go tests.
- `make build` builds Go binaries for local development.
- `go run ./cmd/analyze` for dev runs without building.

- **Name**: Mole
- **Purpose**: A lightweight, robust macOS cleanup and system analysis tool.
- **Core Value**: Native, fast, safe, and dependency-free (pure Bash + static Go binary).
- **Mechanism**:
  - **Cleaning**: Pure Bash scripts for transparency and safety.
  - **Analysis**: High-concurrency Go TUI (Bubble Tea) for disk scanning.
  - **Monitoring**: Real-time Go TUI for system status.
## Key Behaviors

## 3. Technology Stack

- **Shell**: Bash 3.2+ (macOS default compatible).
- **Go**: Latest Stable (Bubble Tea framework).
- **Testing**:
  - **Shell**: `bats-core`, `shellcheck`.
  - **Go**: Native `testing` package.
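
For shell changes, a minimal BATS sketch looks like the block below; the file name and test body are illustrative, and it assumes `clean.sh` keeps its `--dry-run` flag (listed under Build & Test):

```bash
#!/usr/bin/env bats
# tests/clean_dry_run.bats - hypothetical example, not an existing repo file.

@test "clean --dry-run exits cleanly" {
  run ./bin/clean.sh --dry-run
  [ "$status" -eq 0 ]
}
```

Run it with `bats tests/` after `brew install bats-core`.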

## 4. Repository Architecture

### Directory Structure

- **`bin/`**: Standalone entry points.
  - `mole`: Main CLI wrapper.
  - `clean.sh`, `uninstall.sh`: Logic wrappers calling `lib/`.
- **`cmd/`**: Go applications.
  - `analyze/`: Disk space analyzer (concurrent, TUI).
  - `status/`: System monitor (TUI).
- **`lib/`**: Core Shell Logic.
  - `core/`: Low-level utilities (logging, `safe_remove`, sudo helpers).
  - `clean/`: Domain-specific cleanup tasks (`brew`, `caches`, `system`).
  - `ui/`: Reusable TUI components (`menu_paginated.sh`).
- **`scripts/`**: Development tools (`run-tests.sh`, `build-analyze.sh`).
- **`tests/`**: BATS integration tests.

## 5. Key Workflows

### Development

1. **Understand**: Read `lib/core/` to know what tools are available.
2. **Implement**:
   - For Shell: Add functions to `lib/`, source them in `bin/`.
   - For Go: Edit `cmd/app/*.go`.
3. **Verify**: Use dry-run modes first.

**Commands**:

- `./scripts/run-tests.sh`: **Run EVERYTHING** (Lint, Syntax, Unit, Go).
- `./bin/clean.sh --dry-run`: Test cleanup logic safely.
- `go run ./cmd/analyze`: Run analyzer in dev mode.

### Building

- `./scripts/build-analyze.sh`: Compiles `analyze-go` binary (Universal).
- `./scripts/build-status.sh`: Compiles `status-go` binary.

### Release

- Versions managed via git tags.
- Build scripts embed version info into binaries.

## 6. Implementation Details

### Safety System (`lib/core/file_ops.sh`)

- **Crucial**: Never use `rm -rf` directly.
- **Use**:
  - `safe_remove "/path"`
  - `safe_find_delete "/path" "*.log" 7 "f"`
- **Protection**:
  - `validate_path_for_deletion` prevents root/system deletion.
  - Path checks ensure the target is absolute and safe.
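
A usage sketch of these helpers follows; the paths are placeholders, and the `7 "f"` pair is read here as an age-in-days filter plus a file-type flag, an inference from the example above rather than confirmed API:

```bash
#!/bin/bash
# Hypothetical caller of the safe_* wrappers - paths are illustrative.
source "lib/core/file_ops.sh"

# Remove one cache directory; validate_path_for_deletion rejects root/system paths.
safe_remove "$HOME/Library/Caches/com.example.app"

# Delete *.log files older than 7 days, files only ("f") - argument meaning assumed.
safe_find_delete "$HOME/Library/Logs/ExampleApp" "*.log" 7 "f"
```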

### Go Concurrency (`cmd/analyze`)

- **Worker Pool**: Tuned dynamically (16-64 workers) to respect system load.
- **Throttling**: UI updates throttled (every 100 items) to keep TUI responsive (80ms tick).
- **Memory**: Uses Heaps for top-file tracking to minimize RAM usage.

### TUI Unification

- **Keybindings**: `j/k` (Nav), `space` (Select), `enter` (Action), `R` (Refresh).
- **Style**: Compact footers ` | ` and standard colors defined in `lib/core/base.sh` or Go constants.

## 7. Common AI Tasks

- **Adding a Cleanup Task** (sketch after this list):
  1. Create/Edit `lib/clean/topic.sh`.
  2. Define `clean_topic()`.
  3. Register in `lib/optimize/tasks.sh` or `bin/clean.sh`.
  4. **MUST** use `safe_*` functions.
- **Modifying Go UI**:
  1. Update `model` struct in `main.go`.
  2. Update `View()` in `view.go`.
  3. Run `./scripts/build-analyze.sh` to test.
- **Fixing a Bug**:
  1. Reproduce with a new BATS test in `tests/`.
  2. Fix logic.
  3. Verify with `./scripts/run-tests.sh`.
- `mole update` uses `install.sh` with `--prefix`/`--config`; keep these flags.
- Cleanup must go through `safe_*` and respect protection lists.
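
Putting the cleanup-task steps together, a sketch of a new module might look like this; the `topic` name and cache path are placeholders, not existing code:

```bash
#!/bin/bash
# lib/clean/topic.sh - hypothetical module following steps 1-4 above.

clean_topic() {
    local cache_dir="$HOME/Library/Caches/example-tool"
    [[ -d "$cache_dir" ]] || return 0
    # Step 4: all deletion goes through the safe_* wrappers.
    safe_remove "$cache_dir"
}
```

Register `clean_topic` in `lib/optimize/tasks.sh` or `bin/clean.sh` (step 3), then verify with `./bin/clean.sh --dry-run` before running the test suite.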

@@ -5,9 +5,6 @@
```bash
# Install development tools
brew install shfmt shellcheck bats-core

# Install git hooks (validates universal binaries)
./scripts/setup-hooks.sh
```

## Development
@@ -31,7 +28,7 @@ Individual commands:
./scripts/format.sh

# Run tests only
./tests/run.sh
./scripts/run-tests.sh
```

## Code Style
@@ -158,23 +155,20 @@ Format: `[MODULE_NAME] message` output to stderr.
- Run `go vet ./cmd/...` to check for issues
- Build with `go build ./...` to verify all packages compile

**Building Universal Binaries:**
**Building Go Binaries:**

⚠️ **IMPORTANT**: Never use `go build` directly to create `bin/analyze-go` or `bin/status-go`!

Mole must support both Intel and Apple Silicon Macs. Always use the build scripts:
For local development:

```bash
# Build universal binaries (x86_64 + arm64)
./scripts/build-analyze.sh
./scripts/build-status.sh
# Build binaries for current architecture
make build

# Or run directly without building
go run ./cmd/analyze
go run ./cmd/status
```

For local development/testing, you can use:
- `go run ./cmd/status` or `go run ./cmd/analyze` (quick iteration)
- `go build ./cmd/status` (creates single-arch binary for testing)

The pre-commit hook will prevent you from accidentally committing non-universal binaries.
For releases, GitHub Actions builds architecture-specific binaries automatically.
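
To check what the hook enforces, you can inspect a binary's architectures with standard macOS tools; the expected output in the comments is the typical shape, not captured from this repository:

```bash
# A universal binary reports both architectures; a single-arch build reports one.
lipo -archs bin/analyze-go   # expected: x86_64 arm64
file bin/status-go           # expected: Mach-O universal binary with 2 architectures
```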

**Guidelines:**

@@ -4,7 +4,7 @@

**Security Audit & Compliance Report**

Version 1.15.9 | December 29, 2025
Version 1.17.0 | December 31, 2025

---

@@ -31,9 +31,9 @@ Version 1.15.9 | December 29, 2025

| Attribute | Details |
|-----------|---------|
| Audit Date | December 29, 2025 |
| Audit Date | December 31, 2025 |
| Audit Conclusion | **PASSED** |
| Mole Version | V1.15.9 |
| Mole Version | V1.17.0 |
| Audited Branch | `main` (HEAD) |
| Scope | Shell scripts, Go binaries, Configuration |
| Methodology | Static analysis, Threat modeling, Code review |

@@ -1,5 +1,7 @@
#!/bin/bash
# Entry point for the Go-based disk analyzer binary bundled with Mole.
# Mole - Analyze command.
# Runs the Go disk analyzer UI.
# Uses bundled analyze-go binary.

set -euo pipefail

136 bin/clean.sh
@@ -1,6 +1,7 @@
#!/bin/bash
# Mole - Deeper system cleanup
# Complete cleanup with smart password handling
# Mole - Clean command.
# Runs cleanup modules with optional sudo.
# Supports dry-run and whitelist.

set -euo pipefail

@@ -88,8 +89,7 @@ else
WHITELIST_PATTERNS=("${DEFAULT_WHITELIST_PATTERNS[@]}")
fi

# Pre-expand tildes in whitelist patterns once to avoid repetitive expansion in loops
# This significantly improves performance when checking thousands of files
# Expand whitelist patterns once to avoid repeated tilde expansion in hot loops.
expand_whitelist_patterns() {
if [[ ${#WHITELIST_PATTERNS[@]} -gt 0 ]]; then
local -a EXPANDED_PATTERNS
@@ -112,7 +112,7 @@ if [[ ${#WHITELIST_PATTERNS[@]} -gt 0 ]]; then
done
fi

# Global tracking variables (initialized in perform_cleanup)
# Section tracking and summary counters.
total_items=0
TRACK_SECTION=0
SECTION_ACTIVITY=0
@@ -127,31 +127,25 @@ note_activity() {
fi
}

# Cleanup background processes
CLEANUP_DONE=false
# shellcheck disable=SC2329
cleanup() {
local signal="${1:-EXIT}"
local exit_code="${2:-$?}"

# Prevent multiple executions
if [[ "$CLEANUP_DONE" == "true" ]]; then
return 0
fi
CLEANUP_DONE=true

# Stop any inline spinner
stop_inline_spinner 2> /dev/null || true

# Clear any spinner output - spinner outputs to stderr
if [[ -t 1 ]]; then
printf "\r\033[K" >&2 || true
fi

# Clean up temporary files
cleanup_temp_files

# Stop sudo session
stop_sudo_session

show_cursor
@@ -172,7 +166,6 @@ start_section() {
MOLE_SPINNER_PREFIX="  " start_inline_spinner "Preparing..."
fi

# Write section header to export list in dry-run mode
if [[ "$DRY_RUN" == "true" ]]; then
ensure_user_file "$EXPORT_LIST_FILE"
echo "" >> "$EXPORT_LIST_FILE"
@@ -240,11 +233,9 @@ normalize_paths_for_cleanup() {
get_cleanup_path_size_kb() {
local path="$1"

# Optimization: Use stat for regular files (much faster than du)
if [[ -f "$path" && ! -L "$path" ]]; then
if command -v stat > /dev/null 2>&1; then
local bytes
# macOS/BSD stat
bytes=$(stat -f%z "$path" 2> /dev/null || echo "0")
if [[ "$bytes" =~ ^[0-9]+$ && "$bytes" -gt 0 ]]; then
echo $(((bytes + 1023) / 1024))
@@ -286,9 +277,7 @@ safe_clean() {
description="$1"
targets=("$1")
else
# Get last argument as description
description="${*: -1}"
# Get all arguments except last as targets array
targets=("${@:1:$#-1}")
fi

@@ -305,12 +294,10 @@ safe_clean() {
MOLE_SPINNER_PREFIX="  " start_inline_spinner "Scanning ${#targets[@]} items..."
fi

# Optimized parallel processing for better performance
local -a existing_paths=()
for path in "${targets[@]}"; do
local skip=false

# Centralized protection for critical apps and system components
if should_protect_path "$path"; then
skip=true
((skipped_count++))
@@ -318,7 +305,6 @@ safe_clean() {

[[ "$skip" == "true" ]] && continue

# Check user-defined whitelist
if is_path_whitelisted "$path"; then
skip=true
((skipped_count++))
@@ -333,7 +319,6 @@ safe_clean() {

debug_log "Cleaning: $description (${#existing_paths[@]} items)"

# Update global whitelist skip counter
if [[ $skipped_count -gt 0 ]]; then
((whitelist_skipped_count += skipped_count))
fi
@@ -355,7 +340,6 @@ safe_clean() {
fi
fi

# Only show spinner if we have enough items to justify it (>10 items)
local show_spinner=false
if [[ ${#existing_paths[@]} -gt 10 ]]; then
show_spinner=true
@@ -363,14 +347,11 @@ safe_clean() {
if [[ -t 1 ]]; then MOLE_SPINNER_PREFIX="  " start_inline_spinner "Scanning items..."; fi
fi

# For larger batches, precompute sizes in parallel for better UX/stat accuracy.
if [[ ${#existing_paths[@]} -gt 3 ]]; then
local temp_dir
# create_temp_dir uses mktemp -d for secure temporary directory creation
temp_dir=$(create_temp_dir)

# Check if we have many small files - in that case parallel overhead > benefit
# If most items are files (not dirs), avoidance of subshells is faster
# Sample up to 20 items or 20% of items (whichever is larger) for better accuracy
local dir_count=0
local sample_size=$((${#existing_paths[@]} > 20 ? 20 : ${#existing_paths[@]}))
local max_sample=$((${#existing_paths[@]} * 20 / 100))
@@ -380,8 +361,7 @@ safe_clean() {
[[ -d "${existing_paths[i]}" ]] && ((dir_count++))
done

# If we have mostly files and few directories, use sequential processing
# Subshells for 50+ files is very slow compared to direct stat
# Heuristic: mostly files -> sequential stat is faster than subshells.
if [[ $dir_count -lt 5 && ${#existing_paths[@]} -gt 20 ]]; then
if [[ -t 1 && "$show_spinner" == "false" ]]; then
MOLE_SPINNER_PREFIX="  " start_inline_spinner "Scanning items..."
@@ -395,7 +375,6 @@ safe_clean() {
size=$(get_cleanup_path_size_kb "$path")
[[ ! "$size" =~ ^[0-9]+$ ]] && size=0

# Write result to file to maintain compatibility with the logic below
if [[ "$size" -gt 0 ]]; then
echo "$size 1" > "$temp_dir/result_${idx}"
else
@@ -403,14 +382,12 @@ safe_clean() {
fi

((idx++))
# Provide UI feedback periodically
if [[ $((idx % 20)) -eq 0 && "$show_spinner" == "true" && -t 1 ]]; then
update_progress_if_needed "$idx" "${#existing_paths[@]}" last_progress_update 1 || true
last_progress_update=$(date +%s)
fi
done
else
# Parallel processing (bash 3.2 compatible)
local -a pids=()
local idx=0
local completed=0
@@ -422,12 +399,8 @@ safe_clean() {
(
local size
size=$(get_cleanup_path_size_kb "$path")
# Ensure size is numeric (additional safety layer)
[[ ! "$size" =~ ^[0-9]+$ ]] && size=0
# Use index + PID for unique filename
local tmp_file="$temp_dir/result_${idx}.$$"
# Optimization: Skip expensive file counting. Size is the key metric.
# Just indicate presence if size > 0
if [[ "$size" -gt 0 ]]; then
echo "$size 1" > "$tmp_file"
else
@@ -443,7 +416,6 @@ safe_clean() {
pids=("${pids[@]:1}")
((completed++))

# Update progress using helper function
if [[ "$show_spinner" == "true" && -t 1 ]]; then
update_progress_if_needed "$completed" "$total_paths" last_progress_update 2 || true
fi
@@ -456,7 +428,6 @@ safe_clean() {
wait "$pid" 2> /dev/null || true
((completed++))

# Update progress using helper function
if [[ "$show_spinner" == "true" && -t 1 ]]; then
update_progress_if_needed "$completed" "$total_paths" last_progress_update 2 || true
fi
@@ -464,24 +435,15 @@ safe_clean() {
fi
fi

# Read results using same index
# Read results back in original order.
idx=0
if [[ ${#existing_paths[@]} -gt 0 ]]; then
for path in "${existing_paths[@]}"; do
local result_file="$temp_dir/result_${idx}"
if [[ -f "$result_file" ]]; then
read -r size count < "$result_file" 2> /dev/null || true
# Even if size is 0 or du failed, we should try to remove the file if it was found
# count > 0 means the file existed at scan time (or we forced it to 1)

# Correction: The subshell now writes "size 1" if size>0, or "0 0" if size=0
# But we want to delete even if size is 0.
# Let's check if the path still exists to be safe, or trust the input list.
# Actually, safe_remove checks existence.

local removed=0
if [[ "$DRY_RUN" != "true" ]]; then
# Handle symbolic links separately (only remove the link, not the target)
if [[ -L "$path" ]]; then
rm "$path" 2> /dev/null && removed=1
else
@@ -500,8 +462,6 @@ safe_clean() {
((total_count += 1))
removed_any=1
else
# Only increment failure count if we actually tried and failed
# Check existence to avoid false failure report for already gone files
if [[ -e "$path" && "$DRY_RUN" != "true" ]]; then
((removal_failed_count++))
fi
@@ -511,22 +471,16 @@ safe_clean() {
done
fi

# Temp dir will be auto-cleaned by cleanup_temp_files
else
local idx=0
if [[ ${#existing_paths[@]} -gt 0 ]]; then
for path in "${existing_paths[@]}"; do
local size_kb
size_kb=$(get_cleanup_path_size_kb "$path")
# Ensure size_kb is numeric (additional safety layer)
[[ ! "$size_kb" =~ ^[0-9]+$ ]] && size_kb=0

# Optimization: Skip expensive file counting, but DO NOT skip deletion if size is 0
# Previously: if [[ "$size_kb" -gt 0 ]]; then ...

local removed=0
if [[ "$DRY_RUN" != "true" ]]; then
# Handle symbolic links separately (only remove the link, not the target)
if [[ -L "$path" ]]; then
rm "$path" 2> /dev/null && removed=1
else
@@ -545,7 +499,6 @@ safe_clean() {
((total_count += 1))
removed_any=1
else
# Only increment failure count if we actually tried and failed
if [[ -e "$path" && "$DRY_RUN" != "true" ]]; then
((removal_failed_count++))
fi
@@ -559,13 +512,12 @@ safe_clean() {
stop_section_spinner
fi

# Track permission failures reported by safe_remove
local permission_end=${MOLE_PERMISSION_DENIED_COUNT:-0}
# Track permission failures in debug output (avoid noisy user warnings).
if [[ $permission_end -gt $permission_start && $removed_any -eq 0 ]]; then
debug_log "Permission denied while cleaning: $description"
fi
if [[ $removal_failed_count -gt 0 && "$DRY_RUN" != "true" ]]; then
# Log to debug instead of showing warning to user (avoid confusion)
debug_log "Skipped $removal_failed_count items (permission denied or in use) for: $description"
fi

@@ -580,7 +532,6 @@ safe_clean() {
if [[ "$DRY_RUN" == "true" ]]; then
echo -e "  ${YELLOW}${ICON_DRY_RUN}${NC} $label ${YELLOW}($size_human dry)${NC}"

# Group paths by parent directory for export (Bash 3.2 compatible)
local paths_temp=$(create_temp_file)

idx=0
@@ -604,9 +555,8 @@ safe_clean() {
done
fi

# Group and export paths
# Group dry-run paths by parent for a compact export list.
if [[ -f "$paths_temp" && -s "$paths_temp" ]]; then
# Sort by parent directory to group children together
sort -t'|' -k1,1 "$paths_temp" | awk -F'|' '
{
parent = $1
@@ -653,7 +603,6 @@ safe_clean() {

start_cleanup() {
if [[ -t 1 ]]; then
# Avoid relying on TERM since CI often runs without it
printf '\033[2J\033[H'
fi
printf '\n'
@@ -669,7 +618,6 @@ start_cleanup() {
echo ""
SYSTEM_CLEAN=false

# Initialize export list file
ensure_user_file "$EXPORT_LIST_FILE"
cat > "$EXPORT_LIST_FILE" << EOF
# Mole Cleanup Preview - $(date '+%Y-%m-%d %H:%M:%S')
@@ -689,22 +637,19 @@ EOF
if [[ -t 0 ]]; then
echo -ne "${PURPLE}${ICON_ARROW}${NC} System caches need sudo — ${GREEN}Enter${NC} continue, ${GRAY}Space${NC} skip: "

# Use read_key to properly handle all key inputs
local choice
choice=$(read_key)

# Check for cancel (ESC or Q)
# ESC/Q aborts, Space skips, Enter enables system cleanup.
if [[ "$choice" == "QUIT" ]]; then
echo -e " ${GRAY}Canceled${NC}"
exit 0
fi

# Space = skip
if [[ "$choice" == "SPACE" ]]; then
echo -e " ${GRAY}Skipped${NC}"
echo ""
SYSTEM_CLEAN=false
# Enter = yes, do system cleanup
elif [[ "$choice" == "ENTER" ]]; then
printf "\r\033[K" # Clear the prompt line
if ensure_sudo_session "System cleanup requires admin access"; then
@@ -717,7 +662,6 @@ EOF
echo -e "${YELLOW}Authentication failed${NC}, continuing with user-level cleanup"
fi
else
# Other keys (including arrow keys) = skip, no message needed
SYSTEM_CLEAN=false
echo -e " ${GRAY}Skipped${NC}"
echo ""
@@ -732,10 +676,8 @@ EOF
fi
}

# Clean Service Worker CacheStorage with domain protection

perform_cleanup() {
# Fast test mode for CI/testing - skip expensive scans
# Test mode skips expensive scans and returns minimal output.
local test_mode_enabled=false
if [[ "${MOLE_TEST_MODE:-0}" == "1" ]]; then
test_mode_enabled=true
@@ -743,10 +685,8 @@ perform_cleanup() {
echo -e "${YELLOW}Dry Run Mode${NC} - Preview only, no deletions"
echo ""
fi
# Show minimal output to satisfy test assertions
echo -e "${GREEN}${ICON_LIST}${NC} User app cache"
if [[ ${#WHITELIST_PATTERNS[@]} -gt 0 ]]; then
# Check if any custom patterns exist (not defaults)
local -a expanded_defaults
expanded_defaults=()
for default in "${DEFAULT_WHITELIST_PATTERNS[@]}"; do
@@ -771,16 +711,13 @@ perform_cleanup() {
total_items=1
files_cleaned=0
total_size_cleaned=0
# Don't return early - continue to summary block for debug log output
fi

if [[ "$test_mode_enabled" == "false" ]]; then
echo -e "${BLUE}${ICON_ADMIN}${NC} $(detect_architecture) | Free space: $(get_free_space)"
fi

# Skip all expensive operations in test mode
if [[ "$test_mode_enabled" == "true" ]]; then
# Jump to summary block
local summary_heading="Test mode complete"
local -a summary_details
summary_details=()
@@ -790,13 +727,10 @@ perform_cleanup() {
return 0
fi

# Pre-check TCC permissions upfront (delegated to clean_caches module)
# Pre-check TCC permissions to avoid mid-run prompts.
check_tcc_permissions

# Show whitelist info if patterns are active
if [[ ${#WHITELIST_PATTERNS[@]} -gt 0 ]]; then
# Count predefined vs custom patterns
# Note: WHITELIST_PATTERNS are already expanded, need to expand defaults for comparison
local predefined_count=0
local custom_count=0

@@ -817,7 +751,6 @@ perform_cleanup() {
fi
done

# Display whitelist status
if [[ $custom_count -gt 0 || $predefined_count -gt 0 ]]; then
local summary=""
[[ $predefined_count -gt 0 ]] && summary+="$predefined_count core"
@@ -827,10 +760,8 @@ perform_cleanup() {

echo -e "${BLUE}${ICON_SUCCESS}${NC} Whitelist: $summary"

# List all whitelist patterns in dry-run mode for verification (Issue #206)
if [[ "$DRY_RUN" == "true" ]]; then
for pattern in "${WHITELIST_PATTERNS[@]}"; do
# Skip FINDER_METADATA sentinel
[[ "$pattern" == "$FINDER_METADATA_SENTINEL" ]] && continue
echo -e "  ${GRAY}→ $pattern${NC}"
done
@@ -838,7 +769,6 @@ perform_cleanup() {
fi
fi

# Hint about Full Disk Access for better results (only if not already granted)
if [[ -t 1 && "$DRY_RUN" != "true" ]]; then
local fda_status=0
has_full_disk_access
@@ -856,20 +786,17 @@ perform_cleanup() {
local had_errexit=0
[[ $- == *e* ]] && had_errexit=1

# Allow cleanup functions to fail without exiting the script
# Individual operations use || true for granular error handling
# Allow per-section failures without aborting the full run.
set +e

# ===== 1. Deep system cleanup (if admin) - Do this first while sudo is fresh =====
# ===== 1. Deep system cleanup (if admin) =====
if [[ "$SYSTEM_CLEAN" == "true" ]]; then
start_section "Deep system"
# Deep system cleanup (delegated to clean_system module)
clean_deep_system
clean_local_snapshots
end_section
fi

# Show whitelist warnings if any
if [[ ${#WHITELIST_WARNINGS[@]} -gt 0 ]]; then
echo ""
for warning in "${WHITELIST_WARNINGS[@]}"; do
@@ -877,21 +804,17 @@ perform_cleanup() {
done
fi

# ===== 2. User essentials =====
start_section "User essentials"
# User essentials cleanup (delegated to clean_user_data module)
clean_user_essentials
scan_external_volumes
end_section

start_section "Finder metadata"
# Finder metadata cleanup (delegated to clean_user_data module)
clean_finder_metadata
end_section

# ===== 3. macOS system caches =====
start_section "macOS system caches"
# macOS system caches cleanup (delegated to clean_user_data module)
clean_macos_system_caches
clean_recent_items
clean_mail_downloads
@@ -899,55 +822,45 @@ perform_cleanup() {

# ===== 4. Sandboxed app caches =====
start_section "Sandboxed app caches"
# Sandboxed app caches cleanup (delegated to clean_user_data module)
clean_sandboxed_app_caches
end_section

# ===== 5. Browsers =====
start_section "Browsers"
# Browser caches cleanup (delegated to clean_user_data module)
clean_browsers
end_section

# ===== 6. Cloud storage =====
start_section "Cloud storage"
# Cloud storage caches cleanup (delegated to clean_user_data module)
clean_cloud_storage
end_section

# ===== 7. Office applications =====
start_section "Office applications"
# Office applications cleanup (delegated to clean_user_data module)
clean_office_applications
end_section

# ===== 8. Developer tools =====
start_section "Developer tools"
# Developer tools cleanup (delegated to clean_dev module)
clean_developer_tools
end_section

# ===== 9. Development applications =====
start_section "Development applications"
# User GUI applications cleanup (delegated to clean_user_apps module)
clean_user_gui_applications
end_section

# ===== 10. Virtualization tools =====
start_section "Virtual machine tools"
# Virtualization tools cleanup (delegated to clean_user_data module)
clean_virtualization_tools
end_section

# ===== 11. Application Support logs and caches cleanup =====
start_section "Application Support"
# Clean logs, Service Worker caches, Code Cache, Crashpad, stale updates, Group Containers
clean_application_support_logs
end_section

# ===== 12. Orphaned app data cleanup =====
# Only touch apps missing from scan + 60+ days inactive
# Skip protected vendors, keep Preferences/Application Support
# ===== 12. Orphaned app data cleanup (60+ days inactive, skip protected vendors) =====
start_section "Uninstalled app data"
clean_orphaned_app_data
end_section
@@ -957,13 +870,11 @@ perform_cleanup() {

# ===== 14. iOS device backups =====
start_section "iOS device backups"
# iOS device backups check (delegated to clean_user_data module)
check_ios_device_backups
end_section

# ===== 15. Time Machine incomplete backups =====
start_section "Time Machine incomplete backups"
# Time Machine incomplete backups cleanup (delegated to clean_system module)
clean_time_machine_failed_backups
end_section

@@ -972,11 +883,11 @@ perform_cleanup() {

local summary_heading=""
local summary_status="success"
if [[ "$DRY_RUN" == "true" ]]; then
summary_heading="Dry run complete - no changes made"
else
summary_heading="Cleanup complete"
fi
if [[ "$DRY_RUN" == "true" ]]; then
summary_heading="Dry run complete - no changes made"
else
summary_heading="Cleanup complete"
fi

local -a summary_details=()

@@ -985,13 +896,11 @@ perform_cleanup() {
freed_gb=$(echo "$total_size_cleaned" | awk '{printf "%.2f", $1/1024/1024}')

if [[ "$DRY_RUN" == "true" ]]; then
# Build compact stats line for dry run
local stats="Potential space: ${GREEN}${freed_gb}GB${NC}"
[[ $files_cleaned -gt 0 ]] && stats+=" | Items: $files_cleaned"
[[ $total_items -gt 0 ]] && stats+=" | Categories: $total_items"
summary_details+=("$stats")

# Add summary to export file
{
echo ""
echo "# ============================================"
@@ -1005,7 +914,6 @@ perform_cleanup() {
summary_details+=("Detailed file list: ${GRAY}$EXPORT_LIST_FILE${NC}")
summary_details+=("Use ${GRAY}mo clean --whitelist${NC} to add protection rules")
else
# Build summary line: Space freed + Items cleaned
local summary_line="Space freed: ${GREEN}${freed_gb}GB${NC}"

if [[ $files_cleaned -gt 0 && $total_items -gt 0 ]]; then
@@ -1026,7 +934,6 @@ perform_cleanup() {
fi
fi

# Free space now at the end
local final_free_space=$(get_free_space)
summary_details+=("Free space now: $final_free_space")
fi
@@ -1040,7 +947,6 @@ perform_cleanup() {
summary_details+=("Free space now: $(get_free_space)")
fi

# Restore strict error handling only if it was enabled
if [[ $had_errexit -eq 1 ]]; then
set -e
fi

@@ -1,15 +1,18 @@
#!/bin/bash
# Mole - Optimize command.
# Runs system maintenance checks and fixes.
# Supports dry-run where applicable.

set -euo pipefail

# Fix locale issues (Issue #83)
# Fix locale issues.
export LC_ALL=C
export LANG=C

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
source "$SCRIPT_DIR/lib/core/common.sh"

# Set up cleanup trap for temporary files
# Clean temp files on exit.
trap cleanup_temp_files EXIT INT TERM
source "$SCRIPT_DIR/lib/core/sudo.sh"
source "$SCRIPT_DIR/lib/manage/update.sh"
@@ -26,7 +29,7 @@ print_header() {
}

run_system_checks() {
# Skip system checks in dry-run mode (only show what optimizations would run)
# Skip checks in dry-run mode.
if [[ "${MOLE_DRY_RUN:-0}" == "1" ]]; then
return 0
fi
@@ -36,7 +39,6 @@ run_system_checks() {
unset MOLE_SECURITY_FIXES_SKIPPED
echo ""

# Run checks and display results directly without grouping
check_all_updates
echo ""

@@ -152,7 +154,7 @@ touchid_supported() {
fi
fi

# Fallback: Apple Silicon Macs usually have Touch ID
# Fallback: Apple Silicon Macs usually have Touch ID.
if [[ "$(uname -m)" == "arm64" ]]; then
return 0
fi
@@ -354,7 +356,7 @@ main() {
fi
print_header

# Show dry-run mode indicator
# Dry-run indicator.
if [[ "${MOLE_DRY_RUN:-0}" == "1" ]]; then
echo -e "${YELLOW}${ICON_DRY_RUN} DRY RUN MODE${NC} - No files will be modified\n"
fi

@@ -1,6 +1,7 @@
#!/bin/bash
# Mole - Project purge command (mo purge)
# Remove old project build artifacts and dependencies
# Mole - Purge command.
# Cleans heavy project build artifacts.
# Interactive selection by project.

set -euo pipefail

@@ -1,5 +1,7 @@
#!/bin/bash
# Entry point for the Go-based system status panel bundled with Mole.
# Mole - Status command.
# Runs the Go system status panel.
# Shows live system metrics.

set -euo pipefail

@@ -1,6 +1,7 @@
#!/bin/bash
# Mole - Touch ID Configuration Helper
# Automatically configure Touch ID for sudo
# Mole - Touch ID command.
# Configures sudo with Touch ID.
# Guided toggle with safety checks.

set -euo pipefail

@@ -1,28 +1,25 @@
#!/bin/bash
# Mole - Uninstall Module
# Interactive application uninstaller with keyboard navigation
#
# Usage:
# uninstall.sh # Launch interactive uninstaller
# uninstall.sh # Launch interactive uninstaller
# Mole - Uninstall command.
# Interactive app uninstaller.
# Removes app files and leftovers.

set -euo pipefail

# Fix locale issues (avoid Perl warnings on non-English systems)
# Fix locale issues on non-English systems.
export LC_ALL=C
export LANG=C

# Get script directory and source common functions
# Load shared helpers.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../lib/core/common.sh"

# Set up cleanup trap for temporary files
# Clean temp files on exit.
trap cleanup_temp_files EXIT INT TERM
source "$SCRIPT_DIR/../lib/ui/menu_paginated.sh"
source "$SCRIPT_DIR/../lib/ui/app_selector.sh"
source "$SCRIPT_DIR/../lib/uninstall/batch.sh"

# Initialize global variables
# State
selected_apps=()
declare -a apps_data=()
declare -a selection_state=()
@@ -30,10 +27,9 @@ total_items=0
files_cleaned=0
total_size_cleaned=0

# Scan applications and collect information
# Scan applications and collect information.
scan_applications() {
# Application scan with intelligent caching (24h TTL)
# This speeds up repeated scans significantly by caching app metadata
# Cache app scan (24h TTL).
local cache_dir="$HOME/.cache/mole"
local cache_file="$cache_dir/app_scan_cache"
local cache_ttl=86400 # 24 hours
@@ -41,12 +37,10 @@ scan_applications() {

ensure_user_dir "$cache_dir"

# Check if cache exists and is fresh
if [[ $force_rescan == false && -f "$cache_file" ]]; then
local cache_age=$(($(date +%s) - $(get_file_mtime "$cache_file")))
[[ $cache_age -eq $(date +%s) ]] && cache_age=86401 # Handle mtime read failure
if [[ $cache_age -lt $cache_ttl ]]; then
# Cache hit - show brief feedback and return cached results
if [[ -t 2 ]]; then
echo -e "${GREEN}Loading from cache...${NC}" >&2
sleep 0.3 # Brief pause so user sees the message
@@ -56,7 +50,6 @@ scan_applications() {
fi
fi

# Cache miss - perform full scan
local inline_loading=false
if [[ -t 1 && -t 2 ]]; then
inline_loading=true
@@ -66,12 +59,10 @@ scan_applications() {
local temp_file
temp_file=$(create_temp_file)

# Pre-cache current epoch to avoid repeated date calls
local current_epoch
current_epoch=$(date "+%s")

# First pass: quickly collect all valid app paths and bundle IDs
# This pass does NOT call mdls (slow) - only reads plists (fast)
# Pass 1: collect app paths and bundle IDs (no mdls).
local -a app_data_tuples=()
local -a app_dirs=(
"/Applications"
@@ -104,37 +95,31 @@ scan_applications() {
local app_name
app_name=$(basename "$app_path" .app)

# Skip nested apps (e.g. inside Wrapper/ or Frameworks/ of another app)
# Check if parent path component ends in .app (e.g. /Foo.app/Bar.app or /Foo.app/Contents/Bar.app)
# This prevents false positives like /Old.apps/Target.app
# Skip nested apps inside another .app bundle.
local parent_dir
parent_dir=$(dirname "$app_path")
if [[ "$parent_dir" == *".app" || "$parent_dir" == *".app/"* ]]; then
continue
fi

# Get bundle ID (fast plist read, no mdls call yet)
# Bundle ID from plist (fast path).
local bundle_id="unknown"
if [[ -f "$app_path/Contents/Info.plist" ]]; then
bundle_id=$(defaults read "$app_path/Contents/Info.plist" CFBundleIdentifier 2> /dev/null || echo "unknown")
fi

# Skip system critical apps (input methods, system components, etc.)
if should_protect_from_uninstall "$bundle_id"; then
continue
fi

# Store tuple: app_path|app_name|bundle_id
# Display name and metadata will be resolved in parallel later (second pass)
# Store tuple for pass 2 (metadata + size).
app_data_tuples+=("${app_path}|${app_name}|${bundle_id}")
done < <(command find "$app_dir" -name "*.app" -maxdepth 3 -print0 2> /dev/null)
done

# Second pass: process each app with parallel metadata extraction
# This pass calls mdls (slow) and calculates sizes, but does so in parallel
# Pass 2: metadata + size in parallel (mdls is slow).
local app_count=0
local total_apps=${#app_data_tuples[@]}
# Bound parallelism - for metadata queries, can go higher since it's mostly waiting
local max_parallel
max_parallel=$(get_optimal_parallel_jobs "io")
if [[ $max_parallel -lt 8 ]]; then
@@ -151,25 +136,17 @@ scan_applications() {

IFS='|' read -r app_path app_name bundle_id <<< "$app_data_tuple"

# Get localized display name (moved from first pass for better performance)
# Priority order for name selection (prefer localized names):
# 1. System metadata display name (kMDItemDisplayName) - respects system language
# 2. CFBundleDisplayName - usually localized
# 3. CFBundleName - fallback
# 4. App folder name - last resort
# Display name priority: mdls display name → bundle display → bundle name → folder.
local display_name="$app_name"
if [[ -f "$app_path/Contents/Info.plist" ]]; then
# Try to get localized name from system metadata (best for i18n)
local md_display_name
md_display_name=$(run_with_timeout 0.05 mdls -name kMDItemDisplayName -raw "$app_path" 2> /dev/null || echo "")

# Get bundle names from plist
local bundle_display_name
bundle_display_name=$(plutil -extract CFBundleDisplayName raw "$app_path/Contents/Info.plist" 2> /dev/null)
local bundle_name
bundle_name=$(plutil -extract CFBundleName raw "$app_path/Contents/Info.plist" 2> /dev/null)

# Sanitize metadata values (prevent paths, pipes, and newlines)
if [[ "$md_display_name" == /* ]]; then md_display_name=""; fi
md_display_name="${md_display_name//|/-}"
md_display_name="${md_display_name//[$'\t\r\n']/}"
@@ -180,7 +157,6 @@ scan_applications() {
bundle_name="${bundle_name//|/-}"
bundle_name="${bundle_name//[$'\t\r\n']/}"

# Select best available name
if [[ -n "$md_display_name" && "$md_display_name" != "(null)" && "$md_display_name" != "$app_name" ]]; then
display_name="$md_display_name"
elif [[ -n "$bundle_display_name" && "$bundle_display_name" != "(null)" ]]; then
@@ -190,29 +166,25 @@ scan_applications() {
fi
fi

# Final safety check: if display_name looks like a path, revert to app_name
if [[ "$display_name" == /* ]]; then
display_name="$app_name"
fi
# Ensure no pipes or newlines in final display name
display_name="${display_name//|/-}"
display_name="${display_name//[$'\t\r\n']/}"

# Calculate app size (in parallel for performance)
# App size (KB → human).
local app_size="N/A"
local app_size_kb="0"
if [[ -d "$app_path" ]]; then
# Get size in KB, then format for display
app_size_kb=$(get_path_size_kb "$app_path")
app_size=$(bytes_to_human "$((app_size_kb * 1024))")
fi

# Get last used date with fallback strategy
# Last used: mdls (fast timeout) → mtime.
local last_used="Never"
local last_used_epoch=0

if [[ -d "$app_path" ]]; then
# Try mdls first with short timeout (0.1s) for accuracy, fallback to mtime for speed
local metadata_date
metadata_date=$(run_with_timeout 0.1 mdls -name kMDItemLastUsedDate -raw "$app_path" 2> /dev/null || echo "")

@@ -220,7 +192,6 @@ scan_applications() {
last_used_epoch=$(date -j -f "%Y-%m-%d %H:%M:%S %z" "$metadata_date" "+%s" 2> /dev/null || echo "0")
fi

# Fallback if mdls failed or returned nothing
if [[ "$last_used_epoch" -eq 0 ]]; then
last_used_epoch=$(get_file_mtime "$app_path")
fi
@@ -276,7 +247,6 @@ scan_applications() {
) &
spinner_pid=$!

# Process apps in parallel batches
for app_data_tuple in "${app_data_tuples[@]}"; do
((app_count++))
process_app_metadata "$app_data_tuple" "$temp_file" "$current_epoch" &
@@ -368,7 +338,7 @@ load_applications() {
return 0
}

# Cleanup function - restore cursor and clean up
# Cleanup: restore cursor and kill keepalive.
cleanup() {
if [[ "${MOLE_ALT_SCREEN_ACTIVE:-}" == "1" ]]; then
leave_alt_screen
@@ -387,7 +357,7 @@ trap cleanup EXIT INT TERM

main() {
local force_rescan=false
# Parse global flags locally if needed (currently none specific to uninstall)
# Global flags
for arg in "$@"; do
case "$arg" in
"--debug")
@@ -403,7 +373,6 @@ main() {

hide_cursor

# Main interaction loop
while true; do
local needs_scanning=true
local cache_file="$HOME/.cache/mole/app_scan_cache"

@@ -75,11 +75,6 @@ func TestScanPathConcurrentBasic(t *testing.T) {
if bytes := atomic.LoadInt64(&bytesScanned); bytes == 0 {
t.Fatalf("expected byte counter to increase")
}
// current path update is throttled, so it might be empty for small scans
// if current == "" {
// t.Fatalf("expected current path to be updated")
// }

foundSymlink := false
for _, entry := range result.Entries {
if strings.HasSuffix(entry.Name, " →") {
@@ -148,7 +143,7 @@ func TestOverviewStoreAndLoad(t *testing.T) {
t.Fatalf("snapshot mismatch: want %d, got %d", want, got)
}

// Force reload from disk and ensure value persists.
// Reload from disk and ensure value persists.
resetOverviewSnapshotForTest()
got, err = loadStoredOverviewSize(path)
if err != nil {
@@ -220,7 +215,7 @@ func TestMeasureOverviewSize(t *testing.T) {
t.Fatalf("expected positive size, got %d", size)
}

// Ensure snapshot stored
// Ensure snapshot stored.
cached, err := loadStoredOverviewSize(target)
if err != nil {
t.Fatalf("loadStoredOverviewSize: %v", err)
@@ -279,13 +274,13 @@ func TestLoadCacheExpiresWhenDirectoryChanges(t *testing.T) {
t.Fatalf("saveCacheToDisk: %v", err)
}

// Touch directory to advance mtime beyond grace period.
// Advance mtime beyond grace period.
time.Sleep(time.Millisecond * 10)
if err := os.Chtimes(target, time.Now(), time.Now()); err != nil {
t.Fatalf("chtimes: %v", err)
}

// Force modtime difference beyond grace window by simulating an older cache entry.
// Simulate older cache entry to exceed grace window.
cachePath, err := getCachePath(target)
if err != nil {
t.Fatalf("getCachePath: %v", err)
@@ -335,24 +330,24 @@ func TestScanPathPermissionError(t *testing.T) {
t.Fatalf("create locked dir: %v", err)
}

// Create a file inside before locking, just to be sure
// Create a file before locking.
if err := os.WriteFile(filepath.Join(lockedDir, "secret.txt"), []byte("shh"), 0o644); err != nil {
t.Fatalf("write secret: %v", err)
}

// Remove permissions
// Remove permissions.
if err := os.Chmod(lockedDir, 0o000); err != nil {
t.Fatalf("chmod 000: %v", err)
}
defer func() {
// Restore permissions so cleanup can work
// Restore permissions for cleanup.
_ = os.Chmod(lockedDir, 0o755)
}()

var files, dirs, bytes int64
current := ""

// Scanning the locked dir itself should fail
// Scanning the locked dir itself should fail.
_, err := scanPathConcurrent(lockedDir, &files, &dirs, &bytes, &current)
if err == nil {
t.Fatalf("expected error scanning locked directory, got nil")

@@ -222,7 +222,7 @@ func loadCacheFromDisk(path string) (*cacheEntry, error) {
}

if info.ModTime().After(entry.ModTime) {
// Only expire cache if the directory has been newer for longer than the grace window.
// Allow grace window.
if cacheModTimeGrace <= 0 || info.ModTime().Sub(entry.ModTime) > cacheModTimeGrace {
return nil, fmt.Errorf("cache expired: directory modified")
}
@@ -290,29 +290,23 @@ func removeOverviewSnapshot(path string) {
}
}

// prefetchOverviewCache scans overview directories in background
// to populate cache for faster overview mode access
// prefetchOverviewCache warms overview cache in background.
func prefetchOverviewCache(ctx context.Context) {
entries := createOverviewEntries()

// Check which entries need refresh
var needScan []string
for _, entry := range entries {
// Skip if we have fresh cache
if size, err := loadStoredOverviewSize(entry.Path); err == nil && size > 0 {
continue
}
needScan = append(needScan, entry.Path)
}

// Nothing to scan
if len(needScan) == 0 {
return
}

// Scan and cache in background with context cancellation support
for _, path := range needScan {
// Check if context is cancelled
select {
case <-ctx.Done():
return

@@ -5,23 +5,20 @@ import (
"strings"
)

// isCleanableDir checks if a directory is safe to manually delete
// but NOT cleaned by mo clean (so user might want to delete it manually)
// isCleanableDir marks paths safe to delete manually (not handled by mo clean).
func isCleanableDir(path string) bool {
if path == "" {
return false
}

// Exclude paths that mo clean will handle automatically
// These are system caches/logs that mo clean already processes
// Exclude paths mo clean already handles.
if isHandledByMoClean(path) {
return false
}

baseName := filepath.Base(path)

// Only mark project dependencies and build outputs
// These are safe to delete but mo clean won't touch them
// Project dependencies and build outputs are safe.
if projectDependencyDirs[baseName] {
return true
}
@@ -29,9 +26,8 @@ func isCleanableDir(path string) bool {
return false
}

// isHandledByMoClean checks if this path will be cleaned by mo clean
// isHandledByMoClean checks if a path is cleaned by mo clean.
func isHandledByMoClean(path string) bool {
// Paths that mo clean handles (from clean.sh)
cleanPaths := []string{
"/Library/Caches/",
"/Library/Logs/",
@@ -49,16 +45,15 @@ func isHandledByMoClean(path string) bool {
return false
}

// Project dependency and build directories
// These are safe to delete manually but mo clean won't touch them
// Project dependency and build directories.
var projectDependencyDirs = map[string]bool{
// JavaScript/Node dependencies
"node_modules": true,
// JavaScript/Node.
"node_modules": true,
"bower_components": true,
".yarn": true, // Yarn local cache
".pnpm-store": true, // pnpm store
".yarn": true,
".pnpm-store": true,

// Python dependencies and outputs
// Python.
"venv": true,
".venv": true,
"virtualenv": true,
@@ -68,18 +63,18 @@ var projectDependencyDirs = map[string]bool{
".ruff_cache": true,
".tox": true,
".eggs": true,
"htmlcov": true, // Coverage reports
".ipynb_checkpoints": true, // Jupyter checkpoints
"htmlcov": true,
".ipynb_checkpoints": true,

// Ruby dependencies
// Ruby.
"vendor": true,
".bundle": true,

// Java/Kotlin/Scala
".gradle": true, // Project-level Gradle cache
"out": true, // IntelliJ IDEA build output
// Java/Kotlin/Scala.
".gradle": true,
"out": true,

// Build outputs (can be rebuilt)
// Build outputs.
"build": true,
"dist": true,
"target": true,
@@ -88,25 +83,25 @@ var projectDependencyDirs = map[string]bool{
".output": true,
".parcel-cache": true,
".turbo": true,
".vite": true, // Vite cache
".nx": true, // Nx cache
".vite": true,
".nx": true,
"coverage": true,
".coverage": true,
".nyc_output": true, // NYC coverage
".nyc_output": true,

// Frontend framework outputs
".angular": true, // Angular CLI cache
".svelte-kit": true, // SvelteKit build
".astro": true, // Astro cache
".docusaurus": true, // Docusaurus build
// Frontend framework outputs.
".angular": true,
".svelte-kit": true,
".astro": true,
".docusaurus": true,

// iOS/macOS development
// Apple dev.
"DerivedData": true,
"Pods": true,
".build": true,
"Carthage": true,
".dart_tool": true,

// Other tools
".terraform": true, // Terraform plugins
// Other tools.
".terraform": true,
}

@@ -6,35 +6,35 @@ const (
maxEntries = 30
maxLargeFiles = 30
barWidth = 24
minLargeFileSize = 100 << 20 // 100 MB
defaultViewport = 12 // Default viewport when terminal height is unknown
overviewCacheTTL = 7 * 24 * time.Hour // 7 days
minLargeFileSize = 100 << 20
defaultViewport = 12
overviewCacheTTL = 7 * 24 * time.Hour
overviewCacheFile = "overview_sizes.json"
duTimeout = 30 * time.Second // Fail faster to fallback to concurrent scan
duTimeout = 30 * time.Second
mdlsTimeout = 5 * time.Second
maxConcurrentOverview = 8 // Increased parallel overview scans
batchUpdateSize = 100 // Batch atomic updates every N items
cacheModTimeGrace = 30 * time.Minute // Ignore minor directory mtime bumps
maxConcurrentOverview = 8
batchUpdateSize = 100
cacheModTimeGrace = 30 * time.Minute

// Worker pool configuration
minWorkers = 16 // Safe baseline for older machines
maxWorkers = 64 // Cap at 64 to avoid OS resource contention
cpuMultiplier = 4 // Balanced CPU usage
maxDirWorkers = 32 // Limit concurrent subdirectory scans
openCommandTimeout = 10 * time.Second // Timeout for open/reveal commands
// Worker pool limits.
minWorkers = 16
maxWorkers = 64
cpuMultiplier = 4
maxDirWorkers = 32
openCommandTimeout = 10 * time.Second
)
var foldDirs = map[string]bool{
|
||||
// Version control
|
||||
// VCS.
|
||||
".git": true,
|
||||
".svn": true,
|
||||
".hg": true,
|
||||
|
||||
// JavaScript/Node
|
||||
// JavaScript/Node.
|
||||
"node_modules": true,
|
||||
".npm": true,
|
||||
"_npx": true, // ~/.npm/_npx global cache
|
||||
"_cacache": true, // ~/.npm/_cacache
|
||||
"_npx": true,
|
||||
"_cacache": true,
|
||||
"_logs": true,
|
||||
"_locks": true,
|
||||
"_quick": true,
|
||||
@@ -56,7 +56,7 @@ var foldDirs = map[string]bool{
|
||||
".bun": true,
|
||||
".deno": true,
|
||||
|
||||
// Python
|
||||
// Python.
|
||||
"__pycache__": true,
|
||||
".pytest_cache": true,
|
||||
".mypy_cache": true,
|
||||
@@ -73,7 +73,7 @@ var foldDirs = map[string]bool{
|
||||
".pip": true,
|
||||
".pipx": true,
|
||||
|
||||
// Ruby/Go/PHP (vendor), Java/Kotlin/Scala/Rust (target)
|
||||
// Ruby/Go/PHP (vendor), Java/Kotlin/Scala/Rust (target).
|
||||
"vendor": true,
|
||||
".bundle": true,
|
||||
"gems": true,
|
||||
@@ -88,20 +88,20 @@ var foldDirs = map[string]bool{
".composer": true,
".cargo": true,

// Build outputs
// Build outputs.
"build": true,
"dist": true,
".output": true,
"coverage": true,
".coverage": true,

// IDE
// IDE.
".idea": true,
".vscode": true,
".vs": true,
".fleet": true,

// Cache directories
// Cache directories.
".cache": true,
"__MACOSX": true,
".DS_Store": true,
@@ -121,18 +121,18 @@ var foldDirs = map[string]bool{
".sdkman": true,
".nvm": true,

// macOS specific
// macOS.
"Application Scripts": true,
"Saved Application State": true,

// iCloud
// iCloud.
"Mobile Documents": true,

// Docker & Containers
// Containers.
".docker": true,
".containerd": true,

// Mobile development
// Mobile development.
"Pods": true,
"DerivedData": true,
".build": true,
@@ -140,18 +140,18 @@ var foldDirs = map[string]bool{
"Carthage": true,
".dart_tool": true,

// Web frameworks
// Web frameworks.
".angular": true,
".svelte-kit": true,
".astro": true,
".solid": true,

// Databases
// Databases.
".mysql": true,
".postgres": true,
"mongodb": true,

// Other
// Other.
".terraform": true,
".vagrant": true,
"tmp": true,
@@ -170,22 +170,22 @@ var skipSystemDirs = map[string]bool{
"bin": true,
"etc": true,
"var": true,
"opt": false, // User might want to specific check opt
"usr": false, // User might check usr
"Volumes": true, // Skip external drives by default when scanning root
"Network": true, // Skip network mounts
"opt": false,
"usr": false,
"Volumes": true,
"Network": true,
".vol": true,
".Spotlight-V100": true,
".fseventsd": true,
".DocumentRevisions-V100": true,
".TemporaryItems": true,
".MobileBackups": true, // Time Machine local snapshots
".MobileBackups": true,
}

var defaultSkipDirs = map[string]bool{
"nfs": true, // Network File System
"PHD": true, // Parallels Shared Folders / Home Directories
"Permissions": true, // Common macOS deny folder
"nfs": true,
"PHD": true,
"Permissions": true,
}

var skipExtensions = map[string]bool{

@@ -23,13 +23,13 @@ func deletePathCmd(path string, counter *int64) tea.Cmd {
}
}

// deleteMultiplePathsCmd deletes multiple paths and returns combined results
// deleteMultiplePathsCmd deletes paths and aggregates results.
func deleteMultiplePathsCmd(paths []string, counter *int64) tea.Cmd {
return func() tea.Msg {
var totalCount int64
var errors []string

// Delete deeper paths first to avoid parent removal triggering child not-exist errors
// Delete deeper paths first to avoid parent/child conflicts.
pathsToDelete := append([]string(nil), paths...)
sort.Slice(pathsToDelete, func(i, j int) bool {
return strings.Count(pathsToDelete[i], string(filepath.Separator)) > strings.Count(pathsToDelete[j], string(filepath.Separator))
@@ -40,7 +40,7 @@ func deleteMultiplePathsCmd(paths []string, counter *int64) tea.Cmd {
totalCount += count
if err != nil {
if os.IsNotExist(err) {
continue // Parent already removed - not an actionable error
continue
}
errors = append(errors, err.Error())
}
@@ -51,17 +51,16 @@ func deleteMultiplePathsCmd(paths []string, counter *int64) tea.Cmd {
resultErr = &multiDeleteError{errors: errors}
}

// Return empty path to trigger full refresh since multiple items were deleted
return deleteProgressMsg{
done: true,
err: resultErr,
count: totalCount,
path: "", // Empty path signals multiple deletions
path: "",
}
}
}

// multiDeleteError holds multiple deletion errors
// multiDeleteError holds multiple deletion errors.
type multiDeleteError struct {
errors []string
}
@@ -79,14 +78,13 @@ func deletePathWithProgress(root string, counter *int64) (int64, error) {

err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
if err != nil {
// Skip permission errors but continue walking
// Skip permission errors but continue.
if os.IsPermission(err) {
if firstErr == nil {
firstErr = err
}
return filepath.SkipDir
}
// For other errors, record and continue
if firstErr == nil {
firstErr = err
}
@@ -100,7 +98,6 @@ func deletePathWithProgress(root string, counter *int64) (int64, error) {
atomic.StoreInt64(counter, count)
}
} else if firstErr == nil {
// Record first deletion error
firstErr = removeErr
}
}
@@ -108,19 +105,15 @@ func deletePathWithProgress(root string, counter *int64) (int64, error) {
return nil
})

// Track walk error separately
if err != nil && firstErr == nil {
firstErr = err
}

// Try to remove remaining directory structure
// Even if this fails, we still report files deleted
if removeErr := os.RemoveAll(root); removeErr != nil {
if firstErr == nil {
firstErr = removeErr
}
}

// Always return count (even if there were errors), along with first error
return count, firstErr
}

@@ -11,9 +11,7 @@ func TestDeleteMultiplePathsCmdHandlesParentChild(t *testing.T) {
parent := filepath.Join(base, "parent")
child := filepath.Join(parent, "child")

// Create structure:
// parent/fileA
// parent/child/fileC
// Structure: parent/fileA, parent/child/fileC.
if err := os.MkdirAll(child, 0o755); err != nil {
t.Fatalf("mkdir: %v", err)
}

@@ -18,7 +18,7 @@ func displayPath(path string) string {
return path
}

// truncateMiddle truncates string in the middle, keeping head and tail.
// truncateMiddle trims the middle, keeping head and tail.
func truncateMiddle(s string, maxWidth int) string {
runes := []rune(s)
currentWidth := displayWidth(s)
@@ -27,9 +27,7 @@ func truncateMiddle(s string, maxWidth int) string {
return s
}

// Reserve 3 width for "..."
if maxWidth < 10 {
// Simple truncation for very small width
width := 0
for i, r := range runes {
width += runeWidth(r)
@@ -40,11 +38,9 @@ func truncateMiddle(s string, maxWidth int) string {
return s
}

// Keep more of the tail (filename usually more important)
targetHeadWidth := (maxWidth - 3) / 3
targetTailWidth := maxWidth - 3 - targetHeadWidth

// Find head cutoff point based on display width
headWidth := 0
headIdx := 0
for i, r := range runes {
@@ -56,7 +52,6 @@ func truncateMiddle(s string, maxWidth int) string {
headIdx = i + 1
}

// Find tail cutoff point
tailWidth := 0
tailIdx := len(runes)
for i := len(runes) - 1; i >= 0; i-- {
@@ -108,7 +103,6 @@ func coloredProgressBar(value, max int64, percent float64) string {
filled = barWidth
}

// Choose color based on percentage
var barColor string
if percent >= 50 {
barColor = colorRed
@@ -142,7 +136,7 @@ func coloredProgressBar(value, max int64, percent float64) string {
return bar + colorReset
}

// Calculate display width considering CJK characters and Emoji.
// runeWidth returns display width for wide characters and emoji.
func runeWidth(r rune) int {
if r >= 0x4E00 && r <= 0x9FFF || // CJK Unified Ideographs
r >= 0x3400 && r <= 0x4DBF || // CJK Extension A
@@ -173,18 +167,16 @@ func displayWidth(s string) int {
return width
}

// calculateNameWidth computes the optimal name column width based on terminal width.
// Fixed elements: prefix(3) + num(3) + bar(24) + percent(7) + sep(5) + icon(3) + size(12) + hint(4) = 61
// calculateNameWidth computes name column width from terminal width.
func calculateNameWidth(termWidth int) int {
const fixedWidth = 61
available := termWidth - fixedWidth

// Constrain to reasonable bounds
if available < 24 {
return 24 // Minimum for readability
return 24
}
if available > 60 {
return 60 // Maximum to avoid overly wide columns
return 60
}
return available
}
@@ -233,7 +225,7 @@ func padName(name string, targetWidth int) string {
return name + strings.Repeat(" ", targetWidth-currentWidth)
}

// formatUnusedTime formats the time since last access in a compact way.
// formatUnusedTime formats time since last access.
func formatUnusedTime(lastAccess time.Time) string {
if lastAccess.IsZero() {
return ""

@@ -168,7 +168,6 @@ func TestTruncateMiddle(t *testing.T) {
}

func TestDisplayPath(t *testing.T) {
// This test assumes HOME is set
tests := []struct {
name string
setup func() string

@@ -1,15 +1,10 @@
package main

// entryHeap implements heap.Interface for a min-heap of dirEntry (sorted by Size)
// Since we want Top N Largest, we use a Min Heap of size N.
// When adding a new item:
// 1. If heap size < N: push
// 2. If heap size == N and item > min (root): pop min, push item
// The heap will thus maintain the largest N items.
// entryHeap is a min-heap of dirEntry used to keep Top N largest entries.
type entryHeap []dirEntry

func (h entryHeap) Len() int { return len(h) }
func (h entryHeap) Less(i, j int) bool { return h[i].Size < h[j].Size } // Min-heap based on Size
func (h entryHeap) Less(i, j int) bool { return h[i].Size < h[j].Size }
func (h entryHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] }

func (h *entryHeap) Push(x interface{}) {
@@ -24,7 +19,7 @@ func (h *entryHeap) Pop() interface{} {
return x
}

// largeFileHeap implements heap.Interface for fileEntry
// largeFileHeap is a min-heap for fileEntry.
type largeFileHeap []fileEntry

func (h largeFileHeap) Len() int { return len(h) }

@@ -130,7 +130,6 @@ func main() {
var isOverview bool

if target == "" {
// Default to overview mode
isOverview = true
abs = "/"
} else {
@@ -143,8 +142,7 @@ func main() {
isOverview = false
}

// Prefetch overview cache in background (non-blocking)
// Use context with timeout to prevent hanging
// Warm overview cache in background.
prefetchCtx, prefetchCancel := context.WithTimeout(context.Background(), 30*time.Second)
defer prefetchCancel()
go prefetchOverviewCache(prefetchCtx)
@@ -184,7 +182,6 @@ func newModel(path string, isOverview bool) model {
largeMultiSelected: make(map[string]bool),
}

// In overview mode, create shortcut entries
if isOverview {
m.scanning = false
m.hydrateOverviewEntries()
@@ -205,12 +202,10 @@ func createOverviewEntries() []dirEntry {
home := os.Getenv("HOME")
entries := []dirEntry{}

// Separate Home and ~/Library for better visibility and performance
// Home excludes Library to avoid duplicate scanning
// Separate Home and ~/Library to avoid double counting.
if home != "" {
entries = append(entries, dirEntry{Name: "Home", Path: home, IsDir: true, Size: -1})

// Add ~/Library separately so users can see app data usage
userLibrary := filepath.Join(home, "Library")
if _, err := os.Stat(userLibrary); err == nil {
entries = append(entries, dirEntry{Name: "App Library", Path: userLibrary, IsDir: true, Size: -1})
@@ -222,7 +217,7 @@ func createOverviewEntries() []dirEntry {
dirEntry{Name: "System Library", Path: "/Library", IsDir: true, Size: -1},
)

// Add Volumes shortcut only when it contains real mounted folders (e.g., external disks)
// Include Volumes only when real mounts exist.
if hasUsefulVolumeMounts("/Volumes") {
entries = append(entries, dirEntry{Name: "Volumes", Path: "/Volumes", IsDir: true, Size: -1})
}
@@ -238,7 +233,6 @@ func hasUsefulVolumeMounts(path string) bool {

for _, entry := range entries {
name := entry.Name()
// Skip hidden control entries for Spotlight/TimeMachine etc.
if strings.HasPrefix(name, ".") {
continue
}
@@ -276,8 +270,7 @@ func (m *model) hydrateOverviewEntries() {
}

func (m *model) sortOverviewEntriesBySize() {
// Sort entries by size (largest first)
// Use stable sort to maintain order when sizes are equal
// Stable sort by size.
sort.SliceStable(m.entries, func(i, j int) bool {
return m.entries[i].Size > m.entries[j].Size
})
@@ -288,7 +281,6 @@ func (m *model) scheduleOverviewScans() tea.Cmd {
return nil
}

// Find pending entries (not scanned and not currently scanning)
var pendingIndices []int
for i, entry := range m.entries {
if entry.Size < 0 && !m.overviewScanningSet[entry.Path] {
@@ -299,18 +291,15 @@ func (m *model) scheduleOverviewScans() tea.Cmd {
}
}

// No more work to do
if len(pendingIndices) == 0 {
m.overviewScanning = false
if !hasPendingOverviewEntries(m.entries) {
// All scans complete - sort entries by size (largest first)
m.sortOverviewEntriesBySize()
m.status = "Ready"
}
return nil
}

// Mark all as scanning
var cmds []tea.Cmd
for _, idx := range pendingIndices {
entry := m.entries[idx]
@@ -361,7 +350,6 @@ func (m model) Init() tea.Cmd {

func (m model) scanCmd(path string) tea.Cmd {
return func() tea.Msg {
// Try to load from persistent cache first
if cached, err := loadCacheFromDisk(path); err == nil {
result := scanResult{
Entries: cached.Entries,
@@ -371,8 +359,6 @@ func (m model) scanCmd(path string) tea.Cmd {
return scanResultMsg{result: result, err: nil}
}

// Use singleflight to avoid duplicate scans of the same path
// If multiple goroutines request the same path, only one scan will be performed
v, err, _ := scanGroup.Do(path, func() (interface{}, error) {
return scanPathConcurrent(path, m.filesScanned, m.dirsScanned, m.bytesScanned, m.currentPath)
})
@@ -383,10 +369,8 @@ func (m model) scanCmd(path string) tea.Cmd {

result := v.(scanResult)

// Save to persistent cache asynchronously with error logging
go func(p string, r scanResult) {
if err := saveCacheToDisk(p, r); err != nil {
// Log error but don't fail the scan
_ = err // Cache save failure is not critical
}
}(path, result)
@@ -412,7 +396,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case deleteProgressMsg:
if msg.done {
m.deleting = false
// Clear multi-selection after delete
m.multiSelected = make(map[string]bool)
m.largeMultiSelected = make(map[string]bool)
if msg.err != nil {
@@ -424,7 +407,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
invalidateCache(m.path)
m.status = fmt.Sprintf("Deleted %d items", msg.count)
// Mark all caches as dirty
for i := range m.history {
m.history[i].Dirty = true
}
@@ -433,9 +415,7 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
entry.Dirty = true
m.cache[path] = entry
}
// Refresh the view
m.scanning = true
// Reset scan counters for rescan
atomic.StoreInt64(m.filesScanned, 0)
atomic.StoreInt64(m.dirsScanned, 0)
atomic.StoreInt64(m.bytesScanned, 0)
@@ -452,7 +432,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.status = fmt.Sprintf("Scan failed: %v", msg.err)
return m, nil
}
// Filter out 0-byte items for cleaner view
filteredEntries := make([]dirEntry, 0, len(msg.result.Entries))
for _, e := range msg.result.Entries {
if e.Size > 0 {
@@ -477,7 +456,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
return m, nil
case overviewSizeMsg:
// Remove from scanning set
delete(m.overviewScanningSet, msg.Path)

if msg.Err == nil {
@@ -488,7 +466,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}

if m.inOverviewMode() {
// Update entry with result
for i := range m.entries {
if m.entries[i].Path == msg.Path {
if msg.Err == nil {
@@ -501,18 +478,15 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
m.totalSize = sumKnownEntrySizes(m.entries)

// Show error briefly if any
if msg.Err != nil {
m.status = fmt.Sprintf("Unable to measure %s: %v", displayPath(msg.Path), msg.Err)
}

// Schedule next batch of scans
cmd := m.scheduleOverviewScans()
return m, cmd
}
return m, nil
case tickMsg:
// Keep spinner running if scanning or deleting or if there are pending overview items
hasPending := false
if m.inOverviewMode() {
for _, entry := range m.entries {
@@ -524,7 +498,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
if m.scanning || m.deleting || (m.inOverviewMode() && (m.overviewScanning || hasPending)) {
m.spinner = (m.spinner + 1) % len(spinnerFrames)
// Update delete progress status
if m.deleting && m.deleteCount != nil {
count := atomic.LoadInt64(m.deleteCount)
if count > 0 {
@@ -540,18 +513,16 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}

func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
// Handle delete confirmation
// Delete confirm flow.
if m.deleteConfirm {
switch msg.String() {
case "delete", "backspace":
// Confirm delete - start async deletion
m.deleteConfirm = false
m.deleting = true
var deleteCount int64
m.deleteCount = &deleteCount

// Collect paths to delete (multi-select or single)
// Using paths instead of indices is safer - avoids deleting wrong files if list changes
// Collect paths (safer than indices).
var pathsToDelete []string
if m.showLargeFiles {
if len(m.largeMultiSelected) > 0 {
@@ -587,13 +558,11 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
m.status = fmt.Sprintf("Deleting %d items...", len(pathsToDelete))
return m, tea.Batch(deleteMultiplePathsCmd(pathsToDelete, m.deleteCount), tickCmd())
case "esc", "q":
// Cancel delete with ESC or Q
m.status = "Cancelled"
m.deleteConfirm = false
m.deleteTarget = nil
return m, nil
default:
// Ignore other keys - keep showing confirmation
return m, nil
}
}
@@ -648,7 +617,6 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
return m, nil
}
if len(m.history) == 0 {
// Return to overview if at top level
if !m.inOverviewMode() {
return m, m.switchToOverviewMode()
}
@@ -663,7 +631,7 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
m.largeOffset = last.LargeOffset
m.isOverview = last.IsOverview
if last.Dirty {
// If returning to overview mode, refresh overview entries instead of scanning
// On overview return, refresh cached entries.
if last.IsOverview {
m.hydrateOverviewEntries()
m.totalSize = sumKnownEntrySizes(m.entries)
@@ -696,17 +664,14 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
m.scanning = false
return m, nil
case "r":
// Clear multi-selection on refresh
m.multiSelected = make(map[string]bool)
m.largeMultiSelected = make(map[string]bool)

if m.inOverviewMode() {
// In overview mode, clear cache and re-scan known entries
m.overviewSizeCache = make(map[string]int64)
m.overviewScanningSet = make(map[string]bool)
m.hydrateOverviewEntries() // Reset sizes to pending

// Reset all entries to pending state for visual feedback
for i := range m.entries {
m.entries[i].Size = -1
}
@@ -717,11 +682,9 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
return m, tea.Batch(m.scheduleOverviewScans(), tickCmd())
}

// Normal mode: Invalidate cache before rescanning
invalidateCache(m.path)
m.status = "Refreshing..."
m.scanning = true
// Reset scan counters for refresh
atomic.StoreInt64(m.filesScanned, 0)
atomic.StoreInt64(m.dirsScanned, 0)
atomic.StoreInt64(m.bytesScanned, 0)
@@ -730,7 +693,6 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
}
return m, tea.Batch(m.scanCmd(m.path), tickCmd())
case "t", "T":
// Don't allow switching to large files view in overview mode
if !m.inOverviewMode() {
m.showLargeFiles = !m.showLargeFiles
if m.showLargeFiles {
@@ -740,16 +702,13 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
} else {
m.multiSelected = make(map[string]bool)
}
// Reset status when switching views
m.status = fmt.Sprintf("Scanned %s", humanizeBytes(m.totalSize))
}
case "o":
// Open selected entries (multi-select aware)
// Limit batch operations to prevent system resource exhaustion
// Open selected entries (multi-select aware).
const maxBatchOpen = 20
if m.showLargeFiles {
if len(m.largeFiles) > 0 {
// Check for multi-selection first
if len(m.largeMultiSelected) > 0 {
count := len(m.largeMultiSelected)
if count > maxBatchOpen {
@@ -775,7 +734,6 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
}
}
} else if len(m.entries) > 0 {
// Check for multi-selection first
if len(m.multiSelected) > 0 {
count := len(m.multiSelected)
if count > maxBatchOpen {
@@ -801,12 +759,10 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
}
}
case "f", "F":
// Reveal selected entries in Finder (multi-select aware)
// Limit batch operations to prevent system resource exhaustion
// Reveal in Finder (multi-select aware).
const maxBatchReveal = 20
if m.showLargeFiles {
if len(m.largeFiles) > 0 {
// Check for multi-selection first
if len(m.largeMultiSelected) > 0 {
count := len(m.largeMultiSelected)
if count > maxBatchReveal {
@@ -832,7 +788,6 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
}
}
} else if len(m.entries) > 0 {
// Check for multi-selection first
if len(m.multiSelected) > 0 {
count := len(m.multiSelected)
if count > maxBatchReveal {
@@ -858,8 +813,7 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
}
}
case " ":
// Toggle multi-select with spacebar
// Using paths as keys (instead of indices) is safer and more maintainable
// Toggle multi-select (paths as keys).
if m.showLargeFiles {
if len(m.largeFiles) > 0 && m.largeSelected < len(m.largeFiles) {
if m.largeMultiSelected == nil {
@@ -871,11 +825,9 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
} else {
m.largeMultiSelected[selectedPath] = true
}
// Update status to show selection count and total size
count := len(m.largeMultiSelected)
if count > 0 {
var totalSize int64
// Calculate total size by looking up each selected path
for path := range m.largeMultiSelected {
for _, file := range m.largeFiles {
if file.Path == path {
@@ -899,11 +851,9 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
} else {
m.multiSelected[selectedPath] = true
}
// Update status to show selection count and total size
count := len(m.multiSelected)
if count > 0 {
var totalSize int64
// Calculate total size by looking up each selected path
for path := range m.multiSelected {
for _, entry := range m.entries {
if entry.Path == path {
@@ -918,15 +868,11 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
}
}
case "delete", "backspace":
// Delete selected file(s) or directory(ies)
if m.showLargeFiles {
if len(m.largeFiles) > 0 {
// Check for multi-selection first
if len(m.largeMultiSelected) > 0 {
m.deleteConfirm = true
// Set deleteTarget to first selected for display purposes
for path := range m.largeMultiSelected {
// Find the file entry by path
for _, file := range m.largeFiles {
if file.Path == path {
m.deleteTarget = &dirEntry{
@@ -952,12 +898,10 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
}
}
} else if len(m.entries) > 0 && !m.inOverviewMode() {
// Check for multi-selection first
if len(m.multiSelected) > 0 {
m.deleteConfirm = true
// Set deleteTarget to first selected for display purposes
for path := range m.multiSelected {
// Find the entry by path
// Resolve entry by path.
for i := range m.entries {
if m.entries[i].Path == path {
m.deleteTarget = &m.entries[i]
@@ -994,7 +938,6 @@ func (m *model) switchToOverviewMode() tea.Cmd {
m.status = "Ready"
return nil
}
// Start tick to animate spinner while scanning
return tea.Batch(cmd, tickCmd())
}

@@ -1004,7 +947,6 @@ func (m model) enterSelectedDir() (tea.Model, tea.Cmd) {
}
selected := m.entries[m.selected]
if selected.IsDir {
// Always save current state to history (including overview mode)
m.history = append(m.history, snapshotFromModel(m))
m.path = selected.Path
m.selected = 0
@@ -1012,11 +954,9 @@ func (m model) enterSelectedDir() (tea.Model, tea.Cmd) {
m.status = "Scanning..."
m.scanning = true
m.isOverview = false
// Clear multi-selection when entering new directory
m.multiSelected = make(map[string]bool)
m.largeMultiSelected = make(map[string]bool)

// Reset scan counters for new scan
atomic.StoreInt64(m.filesScanned, 0)
atomic.StoreInt64(m.dirsScanned, 0)
atomic.StoreInt64(m.bytesScanned, 0)

@@ -31,16 +31,14 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in

var total int64

// Use heaps to track Top N items, drastically reducing memory usage
// for directories with millions of files
// Keep Top N heaps.
entriesHeap := &entryHeap{}
heap.Init(entriesHeap)

largeFilesHeap := &largeFileHeap{}
heap.Init(largeFilesHeap)

// Use worker pool for concurrent directory scanning
// For I/O-bound operations, use more workers than CPU count
// Worker pool sized for I/O-bound scanning.
numWorkers := runtime.NumCPU() * cpuMultiplier
if numWorkers < minWorkers {
numWorkers = minWorkers
@@ -57,17 +55,15 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
sem := make(chan struct{}, numWorkers)
var wg sync.WaitGroup

// Use channels to collect results without lock contention
// Collect results via channels.
entryChan := make(chan dirEntry, len(children))
largeFileChan := make(chan fileEntry, maxLargeFiles*2)

// Start goroutines to collect from channels into heaps
var collectorWg sync.WaitGroup
collectorWg.Add(2)
go func() {
defer collectorWg.Done()
for entry := range entryChan {
// Maintain Top N Heap for entries
if entriesHeap.Len() < maxEntries {
heap.Push(entriesHeap, entry)
} else if entry.Size > (*entriesHeap)[0].Size {
@@ -79,7 +75,6 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
go func() {
defer collectorWg.Done()
for file := range largeFileChan {
// Maintain Top N Heap for large files
if largeFilesHeap.Len() < maxLargeFiles {
heap.Push(largeFilesHeap, file)
} else if file.Size > (*largeFilesHeap)[0].Size {
@@ -96,20 +91,15 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
for _, child := range children {
fullPath := filepath.Join(root, child.Name())

// Skip symlinks to avoid following them into unexpected locations
// Use Type() instead of IsDir() to check without following symlinks
// Skip symlinks to avoid following unexpected targets.
if child.Type()&fs.ModeSymlink != 0 {
// For symlinks, check if they point to a directory
targetInfo, err := os.Stat(fullPath)
isDir := false
if err == nil && targetInfo.IsDir() {
isDir = true
}

// Get symlink size (we don't effectively count the target size towards parent to avoid double counting,
// or we just count the link size itself. Existing logic counts 'size' via getActualFileSize on the link info).
// Ideally we just want navigation.
// Re-fetching info for link itself if needed, but child.Info() does that.
// Count link size only to avoid double-counting targets.
info, err := child.Info()
if err != nil {
continue
@@ -118,28 +108,26 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
atomic.AddInt64(&total, size)

entryChan <- dirEntry{
Name: child.Name() + " →", // Add arrow to indicate symlink
Name: child.Name() + " →",
Path: fullPath,
Size: size,
IsDir: isDir, // Allow navigation if target is directory
IsDir: isDir,
LastAccess: getLastAccessTimeFromInfo(info),
}
continue
}

if child.IsDir() {
// Check if directory should be skipped based on user configuration
if defaultSkipDirs[child.Name()] {
continue
}

// In root directory, skip system directories completely
// Skip system dirs at root.
if isRootDir && skipSystemDirs[child.Name()] {
continue
}

// Special handling for ~/Library - reuse cache to avoid duplicate scanning
// This is scanned separately in overview mode
// ~/Library is scanned separately; reuse cache when possible.
if isHomeDir && child.Name() == "Library" {
wg.Add(1)
go func(name, path string) {
@@ -148,14 +136,11 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
defer func() { <-sem }()

var size int64
// Try overview cache first (from overview scan)
if cached, err := loadStoredOverviewSize(path); err == nil && cached > 0 {
size = cached
} else if cached, err := loadCacheFromDisk(path); err == nil {
// Try disk cache
size = cached.TotalSize
} else {
// No cache available, scan normally
size = calculateDirSizeConcurrent(path, largeFileChan, filesScanned, dirsScanned, bytesScanned, currentPath)
}
atomic.AddInt64(&total, size)
@@ -172,7 +157,7 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
continue
}

// For folded directories, calculate size quickly without expanding
// Folded dirs: fast size without expanding.
if shouldFoldDirWithPath(child.Name(), fullPath) {
wg.Add(1)
go func(name, path string) {
@@ -180,10 +165,8 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
sem <- struct{}{}
defer func() { <-sem }()

// Try du command first for folded dirs (much faster)
size, err := getDirectorySizeFromDu(path)
if err != nil || size <= 0 {
// Fallback to concurrent walk if du fails
size = calculateDirSizeFast(path, filesScanned, dirsScanned, bytesScanned, currentPath)
}
atomic.AddInt64(&total, size)
@@ -194,13 +177,12 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
Path: path,
Size: size,
IsDir: true,
LastAccess: time.Time{}, // Lazy load when displayed
LastAccess: time.Time{},
}
}(child.Name(), fullPath)
continue
}

// Normal directory: full scan with detail
wg.Add(1)
go func(name, path string) {
defer wg.Done()
@@ -216,7 +198,7 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
Path: path,
Size: size,
IsDir: true,
LastAccess: time.Time{}, // Lazy load when displayed
LastAccess: time.Time{},
}
}(child.Name(), fullPath)
continue
@@ -226,7 +208,7 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
if err != nil {
continue
}
// Get actual disk usage for sparse files and cloud files
// Actual disk usage for sparse/cloud files.
size := getActualFileSize(fullPath, info)
atomic.AddInt64(&total, size)
atomic.AddInt64(filesScanned, 1)
@@ -239,7 +221,7 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
IsDir: false,
LastAccess: getLastAccessTimeFromInfo(info),
}
// Only track large files that are not code/text files
// Track large files only.
if !shouldSkipFileForLargeTracking(fullPath) && size >= minLargeFileSize {
largeFileChan <- fileEntry{Name: child.Name(), Path: fullPath, Size: size}
}
@@ -247,12 +229,12 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in

wg.Wait()

// Close channels and wait for collectors to finish
// Close channels and wait for collectors.
close(entryChan)
close(largeFileChan)
collectorWg.Wait()

// Convert Heaps to sorted slices (Descending order)
// Convert heaps to sorted slices (descending).
entries := make([]dirEntry, entriesHeap.Len())
for i := len(entries) - 1; i >= 0; i-- {
entries[i] = heap.Pop(entriesHeap).(dirEntry)
@@ -263,20 +245,11 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
largeFiles[i] = heap.Pop(largeFilesHeap).(fileEntry)
}

// Try to use Spotlight (mdfind) for faster large file discovery
// This is a performance optimization that gracefully falls back to scan results
// if Spotlight is unavailable or fails. The fallback is intentionally silent
// because users only care about correct results, not the method used.
// Use Spotlight for large files when available.
if spotlightFiles := findLargeFilesWithSpotlight(root, minLargeFileSize); len(spotlightFiles) > 0 {
// Spotlight results are already sorted top N
// Use them in place of scanned large files
largeFiles = spotlightFiles
}

// Double check sorting consistency (Spotlight returns sorted, but heap pop handles scan results)
// If needed, we could re-sort largeFiles, but heap pop ensures ascending, and we filled reverse, so it's Descending.
// Spotlight returns Descending. So no extra sort needed for either.

return scanResult{
Entries: entries,
LargeFiles: largeFiles,
@@ -285,21 +258,16 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
}

func shouldFoldDirWithPath(name, path string) bool {
// Check basic fold list first
if foldDirs[name] {
return true
}

// Special case: npm cache directories - fold all subdirectories
// This includes: .npm/_quick/*, .npm/_cacache/*, .npm/a-z/*, .tnpm/*
// Handle npm cache structure.
if strings.Contains(path, "/.npm/") || strings.Contains(path, "/.tnpm/") {
// Get the parent directory name
parent := filepath.Base(filepath.Dir(path))
// If parent is a cache folder (_quick, _cacache, etc) or npm dir itself, fold it
if parent == ".npm" || parent == ".tnpm" || strings.HasPrefix(parent, "_") {
return true
}
// Also fold single-letter subdirectories (npm cache structure like .npm/a/, .npm/b/)
if len(name) == 1 {
return true
}
@@ -313,17 +281,14 @@ func shouldSkipFileForLargeTracking(path string) bool {
return skipExtensions[ext]
}

// calculateDirSizeFast performs concurrent directory size calculation using os.ReadDir
// This is a faster fallback than filepath.WalkDir when du fails
// calculateDirSizeFast performs concurrent dir sizing using os.ReadDir.
func calculateDirSizeFast(root string, filesScanned, dirsScanned, bytesScanned *int64, currentPath *string) int64 {
var total int64
var wg sync.WaitGroup

// Create context with timeout
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()

// Limit total concurrency for this walk
concurrency := runtime.NumCPU() * 4
if concurrency > 64 {
concurrency = 64
@@ -351,19 +316,16 @@ func calculateDirSizeFast(root string, filesScanned, dirsScanned, bytesScanned *

for _, entry := range entries {
if entry.IsDir() {
// Directories: recurse concurrently
wg.Add(1)
// Capture loop variable
subDir := filepath.Join(dirPath, entry.Name())
go func(p string) {
defer wg.Done()
sem <- struct{}{} // Acquire token
defer func() { <-sem }() // Release token
sem <- struct{}{}
defer func() { <-sem }()
walk(p)
}(subDir)
atomic.AddInt64(dirsScanned, 1)
} else {
// Files: process immediately
info, err := entry.Info()
if err == nil {
size := getActualFileSize(filepath.Join(dirPath, entry.Name()), info)
@@ -388,9 +350,8 @@ func calculateDirSizeFast(root string, filesScanned, dirsScanned, bytesScanned *
return total
}

// Use Spotlight (mdfind) to quickly find large files in a directory
// Use Spotlight (mdfind) to quickly find large files.
func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
// mdfind query: files >= minSize in the specified directory
query := fmt.Sprintf("kMDItemFSSize >= %d", minSize)

ctx, cancel := context.WithTimeout(context.Background(), mdlsTimeout)
@@ -399,7 +360,6 @@ func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
cmd := exec.CommandContext(ctx, "mdfind", "-onlyin", root, query)
output, err := cmd.Output()
if err != nil {
// Fallback: mdfind not available or failed
return nil
}

@@ -411,28 +371,26 @@ func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
continue
}

// Filter out code files first (cheapest check, no I/O)
// Filter code files first (cheap).
if shouldSkipFileForLargeTracking(line) {
continue
}

// Filter out files in folded directories (cheap string check)
// Filter folded directories (cheap string check).
if isInFoldedDir(line) {
continue
}

// Use Lstat instead of Stat (faster, doesn't follow symlinks)
info, err := os.Lstat(line)
if err != nil {
continue
}

// Skip if it's a directory or symlink
if info.IsDir() || info.Mode()&os.ModeSymlink != 0 {
continue
}

// Get actual disk usage for sparse files and cloud files
// Actual disk usage for sparse/cloud files.
actualSize := getActualFileSize(line, info)
files = append(files, fileEntry{
Name: filepath.Base(line),
@@ -441,12 +399,11 @@ func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
})
}

// Sort by size (descending)
// Sort by size (descending).
sort.Slice(files, func(i, j int) bool {
return files[i].Size > files[j].Size
})

// Return top N
if len(files) > maxLargeFiles {
files = files[:maxLargeFiles]
}
@@ -454,9 +411,8 @@ func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
return files
}

// isInFoldedDir checks if a path is inside a folded directory (optimized)
// isInFoldedDir checks if a path is inside a folded directory.
func isInFoldedDir(path string) bool {
// Split path into components for faster checking
parts := strings.Split(path, string(os.PathSeparator))
for _, part := range parts {
if foldDirs[part] {
@@ -467,7 +423,6 @@ func isInFoldedDir(path string) bool {
}

func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, filesScanned, dirsScanned, bytesScanned *int64, currentPath *string) int64 {
// Read immediate children
children, err := os.ReadDir(root)
if err != nil {
return 0
@@ -476,7 +431,7 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
var total int64
var wg sync.WaitGroup

// Limit concurrent subdirectory scans to avoid too many goroutines
// Limit concurrent subdirectory scans.
maxConcurrent := runtime.NumCPU() * 2
if maxConcurrent > maxDirWorkers {
maxConcurrent = maxDirWorkers
@@ -486,9 +441,7 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
for _, child := range children {
fullPath := filepath.Join(root, child.Name())

// Skip symlinks to avoid following them into unexpected locations
if child.Type()&fs.ModeSymlink != 0 {
// For symlinks, just count their size without following
info, err := child.Info()
if err != nil {
continue
@@ -501,9 +454,7 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
}

if child.IsDir() {
// Check if this is a folded directory
if shouldFoldDirWithPath(child.Name(), fullPath) {
// Use du for folded directories (much faster)
wg.Add(1)
go func(path string) {
defer wg.Done()
@@ -517,7 +468,6 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
continue
}

// Recursively scan subdirectory in parallel
wg.Add(1)
go func(path string) {
defer wg.Done()
@@ -531,7 +481,6 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
continue
}

// Handle files
info, err := child.Info()
if err != nil {
continue
@@ -542,12 +491,11 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
atomic.AddInt64(filesScanned, 1)
atomic.AddInt64(bytesScanned, size)

// Track large files
if !shouldSkipFileForLargeTracking(fullPath) && size >= minLargeFileSize {
largeFileChan <- fileEntry{Name: child.Name(), Path: fullPath, Size: size}
}

// Update current path occasionally to prevent UI jitter
// Update current path occasionally to prevent UI jitter.
if currentPath != nil && atomic.LoadInt64(filesScanned)%int64(batchUpdateSize) == 0 {
*currentPath = fullPath
}

@@ -8,7 +8,7 @@ import (
"sync/atomic"
)

// View renders the TUI display.
// View renders the TUI.
func (m model) View() string {
var b strings.Builder
fmt.Fprintln(&b)
@@ -16,7 +16,6 @@ func (m model) View() string {
if m.inOverviewMode() {
fmt.Fprintf(&b, "%sAnalyze Disk%s\n", colorPurpleBold, colorReset)
if m.overviewScanning {
// Check if we're in initial scan (all entries are pending)
allPending := true
for _, entry := range m.entries {
if entry.Size >= 0 {
@@ -26,19 +25,16 @@ func (m model) View() string {
}
}

if allPending {
// Show prominent loading screen for initial scan
fmt.Fprintf(&b, "%s%s%s%s Analyzing disk usage, please wait...%s\n",
colorCyan, colorBold,
spinnerFrames[m.spinner],
colorReset, colorReset)
return b.String()
} else {
// Progressive scanning - show subtle indicator
fmt.Fprintf(&b, "%sSelect a location to explore:%s ", colorGray, colorReset)
fmt.Fprintf(&b, "%s%s%s%s Scanning...\n\n", colorCyan, colorBold, spinnerFrames[m.spinner], colorReset)
}
} else {
// Check if there are still pending items
hasPending := false
for _, entry := range m.entries {
if entry.Size < 0 {
}
|
||||
|
||||
if m.deleting {
|
||||
// Show delete progress
|
||||
count := int64(0)
|
||||
if m.deleteCount != nil {
|
||||
count = atomic.LoadInt64(m.deleteCount)
|
||||
@@ -130,7 +125,6 @@ func (m model) View() string {
|
||||
sizeColor := colorGray
|
||||
numColor := ""
|
||||
|
||||
// Check if this item is multi-selected (by path, not index)
|
||||
isMultiSelected := m.largeMultiSelected != nil && m.largeMultiSelected[file.Path]
|
||||
selectIcon := "○"
|
||||
if isMultiSelected {
|
||||
@@ -164,8 +158,7 @@ func (m model) View() string {
|
||||
}
|
||||
}
|
||||
totalSize := m.totalSize
|
||||
// For overview mode, use a fixed small width since path names are short
|
||||
// (~/Downloads, ~/Library, etc. - max ~15 chars)
|
||||
// Overview paths are short; fixed width keeps layout stable.
|
||||
nameWidth := 20
|
||||
for idx, entry := range m.entries {
|
||||
icon := "📁"
|
||||
@@ -217,12 +210,10 @@ func (m model) View() string {
|
||||
}
|
||||
displayIndex := idx + 1
|
||||
|
||||
// Priority: cleanable > unused time
|
||||
var hintLabel string
|
||||
if entry.IsDir && isCleanableDir(entry.Path) {
|
||||
hintLabel = fmt.Sprintf("%s🧹%s", colorYellow, colorReset)
|
||||
} else {
|
||||
// For overview mode, get access time on-demand if not set
|
||||
lastAccess := entry.LastAccess
|
||||
if lastAccess.IsZero() && entry.Path != "" {
|
||||
lastAccess = getLastAccessTime(entry.Path)
|
||||
@@ -243,7 +234,6 @@ func (m model) View() string {
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// Normal mode with sizes and progress bars
|
||||
maxSize := int64(1)
|
||||
for _, entry := range m.entries {
|
||||
if entry.Size > maxSize {
|
||||
@@ -272,14 +262,11 @@ func (m model) View() string {
|
||||
name := trimNameWithWidth(entry.Name, nameWidth)
|
||||
paddedName := padName(name, nameWidth)
|
||||
|
||||
// Calculate percentage
|
||||
percent := float64(entry.Size) / float64(m.totalSize) * 100
|
||||
percentStr := fmt.Sprintf("%5.1f%%", percent)
|
||||
|
||||
// Get colored progress bar
|
||||
bar := coloredProgressBar(entry.Size, maxSize, percent)
|
||||
|
||||
// Color the size based on magnitude
|
||||
var sizeColor string
|
||||
if percent >= 50 {
|
||||
sizeColor = colorRed
|
||||
@@ -291,7 +278,6 @@ func (m model) View() string {
|
||||
sizeColor = colorGray
|
||||
}
|
||||
|
||||
// Check if this item is multi-selected (by path, not index)
|
||||
isMultiSelected := m.multiSelected != nil && m.multiSelected[entry.Path]
|
||||
selectIcon := "○"
|
||||
nameColor := ""
|
||||
@@ -300,7 +286,6 @@ func (m model) View() string {
|
||||
nameColor = colorGreen
|
||||
}
|
||||
|
||||
// Keep chart columns aligned even when arrow is shown
|
||||
entryPrefix := " "
|
||||
nameSegment := fmt.Sprintf("%s %s", icon, paddedName)
|
||||
if nameColor != "" {
|
||||
@@ -320,12 +305,10 @@ func (m model) View() string {
|
||||
|
||||
displayIndex := idx + 1
|
||||
|
||||
// Priority: cleanable > unused time
|
||||
var hintLabel string
|
||||
if entry.IsDir && isCleanableDir(entry.Path) {
|
||||
hintLabel = fmt.Sprintf("%s🧹%s", colorYellow, colorReset)
|
||||
} else {
|
||||
// Get access time on-demand if not set
|
||||
lastAccess := entry.LastAccess
|
||||
if lastAccess.IsZero() && entry.Path != "" {
|
||||
lastAccess = getLastAccessTime(entry.Path)
|
||||
@@ -351,7 +334,6 @@ func (m model) View() string {
|
||||
|
||||
fmt.Fprintln(&b)
|
||||
if m.inOverviewMode() {
|
||||
// Show ← Back if there's history (entered from a parent directory)
|
||||
if len(m.history) > 0 {
|
||||
fmt.Fprintf(&b, "%s↑↓←→ | Enter | R Refresh | O Open | F File | ← Back | Q Quit%s\n", colorGray, colorReset)
|
||||
} else {
|
||||
@@ -383,12 +365,10 @@ func (m model) View() string {
|
||||
}
|
||||
if m.deleteConfirm && m.deleteTarget != nil {
|
||||
fmt.Fprintln(&b)
|
||||
// Show multi-selection delete info if applicable
|
||||
var deleteCount int
|
||||
var totalDeleteSize int64
|
||||
if m.showLargeFiles && len(m.largeMultiSelected) > 0 {
|
||||
deleteCount = len(m.largeMultiSelected)
|
||||
// Calculate total size by looking up each selected path
|
||||
for path := range m.largeMultiSelected {
|
||||
for _, file := range m.largeFiles {
|
||||
if file.Path == path {
|
||||
@@ -399,7 +379,6 @@ func (m model) View() string {
|
||||
}
|
||||
} else if !m.showLargeFiles && len(m.multiSelected) > 0 {
|
||||
deleteCount = len(m.multiSelected)
|
||||
// Calculate total size by looking up each selected path
|
||||
for path := range m.multiSelected {
|
||||
for _, entry := range m.entries {
|
||||
if entry.Path == path {
|
||||
@@ -425,27 +404,24 @@ func (m model) View() string {
|
||||
return b.String()
|
||||
}
|
||||
|
||||
// calculateViewport computes the number of visible items based on terminal height.
|
||||
// calculateViewport returns visible rows for the current terminal height.
|
||||
func calculateViewport(termHeight int, isLargeFiles bool) int {
|
||||
if termHeight <= 0 {
|
||||
// Terminal height unknown, use default
|
||||
return defaultViewport
|
||||
}
|
||||
|
||||
// Calculate reserved space for UI elements
|
||||
reserved := 6 // header (3-4 lines) + footer (2 lines)
|
||||
reserved := 6 // Header + footer
|
||||
if isLargeFiles {
|
||||
reserved = 5 // Large files view has less overhead
|
||||
reserved = 5
|
||||
}
|
||||
|
||||
available := termHeight - reserved
|
||||
|
||||
// Ensure minimum and maximum bounds
|
||||
if available < 1 {
|
||||
return 1 // Minimum 1 line for very short terminals
|
||||
return 1
|
||||
}
|
||||
if available > 30 {
|
||||
return 30 // Maximum 30 lines to avoid information overload
|
||||
return 30
|
||||
}
|
||||
|
||||
return available
|
||||
|
||||
@@ -72,7 +72,7 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
|
||||
m.metrics = msg.data
|
||||
m.lastUpdated = msg.data.CollectedAt
|
||||
m.collecting = false
|
||||
// Mark ready after first successful data collection
|
||||
// Mark ready after first successful data collection.
|
||||
if !m.ready {
|
||||
m.ready = true
|
||||
}
|
||||
@@ -126,7 +126,7 @@ func animTick() tea.Cmd {
|
||||
}
|
||||
|
||||
func animTickWithSpeed(cpuUsage float64) tea.Cmd {
|
||||
// Higher CPU = faster animation (50ms to 300ms)
|
||||
// Higher CPU = faster animation.
|
||||
interval := 300 - int(cpuUsage*2.5)
|
||||
if interval < 50 {
|
||||
interval = 50
|
||||
|
||||
@@ -141,16 +141,16 @@ type BluetoothDevice struct {
|
||||
}
|
||||
|
||||
type Collector struct {
|
||||
// Static Cache (Collected once at startup)
|
||||
// Static cache.
|
||||
cachedHW HardwareInfo
|
||||
lastHWAt time.Time
|
||||
hasStatic bool
|
||||
|
||||
// Slow Cache (Collected every 30s-1m)
|
||||
// Slow cache (30s-1m).
|
||||
lastBTAt time.Time
|
||||
lastBT []BluetoothDevice
|
||||
|
||||
// Fast Metrics (Collected every 1 second)
|
||||
// Fast metrics (1s).
|
||||
prevNet map[string]net.IOCountersStat
|
||||
lastNetAt time.Time
|
||||
lastGPUAt time.Time
|
||||
@@ -168,9 +168,7 @@ func NewCollector() *Collector {
|
||||
func (c *Collector) Collect() (MetricsSnapshot, error) {
|
||||
now := time.Now()
|
||||
|
||||
// Start host info collection early (it's fast but good to parallelize if possible,
|
||||
// but it returns a struct needed for result, so we can just run it here or in parallel)
|
||||
// host.Info is usually cached by gopsutil but let's just call it.
|
||||
// Host info is cached by gopsutil; fetch once.
|
||||
hostInfo, _ := host.Info()
|
||||
|
||||
var (
|
||||
@@ -192,7 +190,7 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
|
||||
topProcs []ProcessInfo
|
||||
)
|
||||
|
||||
// Helper to launch concurrent collection
|
||||
// Helper to launch concurrent collection.
|
||||
collect := func(fn func() error) {
|
||||
wg.Add(1)
|
||||
go func() {
|
||||
@@ -209,7 +207,7 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
|
||||
}()
|
||||
}
|
||||
|
||||
// Launch all independent collection tasks
|
||||
// Launch independent collection tasks.
|
||||
collect(func() (err error) { cpuStats, err = collectCPU(); return })
|
||||
collect(func() (err error) { memStats, err = collectMemory(); return })
|
||||
collect(func() (err error) { diskStats, err = collectDisks(); return })
|
||||
@@ -221,7 +219,7 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
|
||||
collect(func() (err error) { sensorStats, _ = collectSensors(); return nil })
|
||||
collect(func() (err error) { gpuStats, err = c.collectGPU(now); return })
|
||||
collect(func() (err error) {
|
||||
// Bluetooth is slow, cache for 30s
|
||||
// Bluetooth is slow; cache for 30s.
|
||||
if now.Sub(c.lastBTAt) > 30*time.Second || len(c.lastBT) == 0 {
|
||||
btStats = c.collectBluetooth(now)
|
||||
c.lastBT = btStats
|
||||
@@ -233,12 +231,11 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
|
||||
})
|
||||
collect(func() (err error) { topProcs = collectTopProcesses(); return nil })
|
||||
|
||||
// Wait for all to complete
|
||||
// Wait for all to complete.
|
||||
wg.Wait()
|
||||
|
||||
// Dependent tasks (must run after others)
|
||||
// Dependent tasks (must run after others)
|
||||
// Cache hardware info as it's expensive and rarely changes
|
||||
// Dependent tasks (post-collect).
|
||||
// Cache hardware info as it's expensive and rarely changes.
|
||||
if !c.hasStatic || now.Sub(c.lastHWAt) > 10*time.Minute {
|
||||
c.cachedHW = collectHardware(memStats.Total, diskStats)
|
||||
c.lastHWAt = now
|
||||
@@ -272,8 +269,6 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
|
||||
}, mergeErr
|
||||
}
|
||||
|
||||
// Utility functions
|
||||
|
||||
func runCmd(ctx context.Context, name string, args ...string) (string, error) {
|
||||
cmd := exec.CommandContext(ctx, name, args...)
|
||||
output, err := cmd.Output()
|
||||
@@ -289,11 +284,9 @@ func commandExists(name string) bool {
|
||||
}
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
// If LookPath panics due to permissions or platform quirks, act as if the command is missing.
|
||||
// Treat LookPath panics as "missing".
|
||||
}
|
||||
}()
|
||||
_, err := exec.LookPath(name)
|
||||
return err == nil
|
||||
}
|
||||
|
||||
// humanBytes is defined in view.go to avoid duplication
|
||||
|
||||
@@ -15,7 +15,7 @@ import (
)

var (
// Package-level cache for heavy system_profiler data
// Cache for heavy system_profiler output.
lastPowerAt time.Time
cachedPower string
powerCacheTTL = 30 * time.Second
@@ -24,15 +24,15 @@ var (
func collectBatteries() (batts []BatteryStatus, err error) {
defer func() {
if r := recover(); r != nil {
// Swallow panics from platform-specific battery probes to keep the UI alive.
// Swallow panics to keep UI alive.
err = fmt.Errorf("battery collection failed: %v", r)
}
}()

// macOS: pmset (fast, for real-time percentage/status)
// macOS: pmset for real-time percentage/status.
if runtime.GOOS == "darwin" && commandExists("pmset") {
if out, err := runCmd(context.Background(), "pmset", "-g", "batt"); err == nil {
// Get heavy info (health, cycles) from cached system_profiler
// Health/cycles from cached system_profiler.
health, cycles := getCachedPowerData()
if batts := parsePMSet(out, health, cycles); len(batts) > 0 {
return batts, nil
@@ -40,7 +40,7 @@ func collectBatteries() (batts []BatteryStatus, err error) {
}
}

// Linux: /sys/class/power_supply
// Linux: /sys/class/power_supply.
matches, _ := filepath.Glob("/sys/class/power_supply/BAT*/capacity")
for _, capFile := range matches {
statusFile := filepath.Join(filepath.Dir(capFile), "status")
@@ -73,9 +73,8 @@ func parsePMSet(raw string, health string, cycles int) []BatteryStatus {
var timeLeft string

for _, line := range lines {
// Check for time remaining
// Time remaining.
if strings.Contains(line, "remaining") {
// Extract time like "1:30 remaining"
parts := strings.Fields(line)
for i, p := range parts {
if p == "remaining" && i > 0 {
@@ -121,7 +120,7 @@ func parsePMSet(raw string, health string, cycles int) []BatteryStatus {
return out
}

// getCachedPowerData returns condition, cycles, and fan speed from cached system_profiler output.
// getCachedPowerData returns condition and cycles from cached system_profiler.
func getCachedPowerData() (health string, cycles int) {
out := getSystemPowerOutput()
if out == "" {
@@ -173,7 +172,7 @@ func collectThermal() ThermalStatus {

var thermal ThermalStatus

// Get fan info and adapter power from cached system_profiler
// Fan info from cached system_profiler.
out := getSystemPowerOutput()
if out != "" {
lines := strings.Split(out, "\n")
@@ -181,7 +180,6 @@ func collectThermal() ThermalStatus {
lower := strings.ToLower(line)
if strings.Contains(lower, "fan") && strings.Contains(lower, "speed") {
if _, after, found := strings.Cut(line, ":"); found {
// Extract number from string like "1200 RPM"
numStr := strings.TrimSpace(after)
numStr, _, _ = strings.Cut(numStr, " ")
thermal.FanSpeed, _ = strconv.Atoi(numStr)
@@ -190,7 +188,7 @@ func collectThermal() ThermalStatus {
}
}

// Get power metrics from ioreg (fast, real-time data)
// Power metrics from ioreg (fast, real-time).
ctxPower, cancelPower := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancelPower()
if out, err := runCmd(ctxPower, "ioreg", "-rn", "AppleSmartBattery"); err == nil {
@@ -198,8 +196,7 @@ func collectThermal() ThermalStatus {
for _, line := range lines {
line = strings.TrimSpace(line)

// Get battery temperature
// Matches: "Temperature" = 3055 (note: space before =)
// Battery temperature ("Temperature" = 3055).
if _, after, found := strings.Cut(line, "\"Temperature\" = "); found {
valStr := strings.TrimSpace(after)
if tempRaw, err := strconv.Atoi(valStr); err == nil && tempRaw > 0 {
@@ -207,13 +204,10 @@ func collectThermal() ThermalStatus {
}
}

// Get adapter power (Watts)
// Read from current adapter: "AdapterDetails" = {"Watts"=140...}
// Skip historical data: "AppleRawAdapterDetails" = ({Watts=90}, {Watts=140})
// Adapter power (Watts) from current adapter.
if strings.Contains(line, "\"AdapterDetails\" = {") && !strings.Contains(line, "AppleRaw") {
if _, after, found := strings.Cut(line, "\"Watts\"="); found {
valStr := strings.TrimSpace(after)
// Remove trailing characters like , or }
valStr, _, _ = strings.Cut(valStr, ",")
valStr, _, _ = strings.Cut(valStr, "}")
valStr = strings.TrimSpace(valStr)
@@ -223,8 +217,7 @@ func collectThermal() ThermalStatus {
}
}

// Get system power consumption (mW -> W)
// Matches: "SystemPowerIn"=12345
// System power consumption (mW -> W).
if _, after, found := strings.Cut(line, "\"SystemPowerIn\"="); found {
valStr := strings.TrimSpace(after)
valStr, _, _ = strings.Cut(valStr, ",")
@@ -235,8 +228,7 @@ func collectThermal() ThermalStatus {
}
}

// Get battery power (mW -> W, positive = discharging)
// Matches: "BatteryPower"=12345
// Battery power (mW -> W, positive = discharging).
if _, after, found := strings.Cut(line, "\"BatteryPower\"="); found {
valStr := strings.TrimSpace(after)
valStr, _, _ = strings.Cut(valStr, ",")
@@ -249,14 +241,13 @@ func collectThermal() ThermalStatus {
}
}

// Fallback: Try thermal level as a proxy if temperature not found
// Fallback: thermal level proxy.
if thermal.CPUTemp == 0 {
ctx2, cancel2 := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel2()
out2, err := runCmd(ctx2, "sysctl", "-n", "machdep.xcpm.cpu_thermal_level")
if err == nil {
level, _ := strconv.Atoi(strings.TrimSpace(out2))
// Estimate temp: level 0-100 roughly maps to 40-100°C
if level >= 0 {
thermal.CPUTemp = 45 + float64(level)*0.5
}

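The ioreg values above are raw integers; a minimal sketch of the scaling, assuming AppleSmartBattery reports temperature in hundredths of a degree Celsius (the temperature conversion itself sits outside the diff's context lines; the mW -> W scaling is stated in the comments):

    package main

    import "fmt"

    func main() {
        tempRaw := 3055 // e.g. from: "Temperature" = 3055
        // Assumed scale: centi-degrees Celsius -> 30.55 C.
        fmt.Printf("battery temp: %.2f C\n", float64(tempRaw)/100.0)
        powerIn := 12345 // e.g. from: "SystemPowerIn"=12345
        // mW -> W, per the comments in collectThermal.
        fmt.Printf("system power: %.1f W\n", float64(powerIn)/1000.0)
    }
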
@@ -80,7 +80,7 @@ func parseSPBluetooth(raw string) []BluetoothDevice {
continue
}
if !strings.HasPrefix(line, " ") && strings.HasSuffix(trim, ":") {
// Reset at top-level sections
// Reset at top-level sections.
currentName = ""
connected = false
battery = ""

@@ -31,12 +31,9 @@ func collectCPU() (CPUStatus, error) {
logical = 1
}

// Use two-call pattern for more reliable CPU measurements
// First call: initialize/store current CPU times
// Two-call pattern for more reliable CPU usage.
cpu.Percent(0, true)
// Wait for sampling interval
time.Sleep(cpuSampleInterval)
// Second call: get actual percentages based on difference
percents, err := cpu.Percent(0, true)
var totalPercent float64
perCoreEstimated := false
@@ -69,7 +66,7 @@ func collectCPU() (CPUStatus, error) {
}
}

// Get P-core and E-core counts for Apple Silicon
// P/E core counts for Apple Silicon.
pCores, eCores := getCoreTopology()

return CPUStatus{
@@ -91,14 +88,13 @@ func isZeroLoad(avg load.AvgStat) bool {
}

var (
// Package-level cache for core topology
// Cache for core topology.
lastTopologyAt time.Time
cachedP, cachedE int
topologyTTL = 10 * time.Minute
)

// getCoreTopology returns P-core and E-core counts on Apple Silicon.
// Returns (0, 0) on non-Apple Silicon or if detection fails.
// getCoreTopology returns P/E core counts on Apple Silicon.
func getCoreTopology() (pCores, eCores int) {
if runtime.GOOS != "darwin" {
return 0, 0
@@ -114,7 +110,6 @@ func getCoreTopology() (pCores, eCores int) {
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

// Get performance level info from sysctl
out, err := runCmd(ctx, "sysctl", "-n",
"hw.perflevel0.logicalcpu",
"hw.perflevel0.name",
@@ -129,15 +124,12 @@ func getCoreTopology() (pCores, eCores int) {
return 0, 0
}

// Parse perflevel0
level0Count, _ := strconv.Atoi(strings.TrimSpace(lines[0]))
level0Name := strings.ToLower(strings.TrimSpace(lines[1]))

// Parse perflevel1
level1Count, _ := strconv.Atoi(strings.TrimSpace(lines[2]))
level1Name := strings.ToLower(strings.TrimSpace(lines[3]))

// Assign based on name (Performance vs Efficiency)
if strings.Contains(level0Name, "performance") {
pCores = level0Count
} else if strings.Contains(level0Name, "efficiency") {

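A minimal standalone sketch of the two-call sampling pattern, using the gopsutil cpu package this code relies on (import path assumed to be v3; the fixed 1s sleep stands in for cpuSampleInterval, which is defined elsewhere in the package):

    package main

    import (
        "fmt"
        "time"

        "github.com/shirou/gopsutil/v3/cpu"
    )

    func main() {
        // First call with interval 0 stores the current CPU times.
        cpu.Percent(0, true)
        // Give the counters time to accumulate a delta.
        time.Sleep(1 * time.Second)
        // Second call computes per-core percentages from the difference.
        percents, err := cpu.Percent(0, true)
        if err != nil {
            return
        }
        for i, p := range percents {
            fmt.Printf("core %d: %.1f%%\n", i, p)
        }
    }
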
@@ -43,7 +43,7 @@ func collectDisks() ([]DiskStatus, error) {
if strings.HasPrefix(part.Mountpoint, "/System/Volumes/") {
continue
}
// Skip private volumes
// Skip /private mounts.
if strings.HasPrefix(part.Mountpoint, "/private/") {
continue
}
@@ -58,12 +58,11 @@ func collectDisks() ([]DiskStatus, error) {
if err != nil || usage.Total == 0 {
continue
}
// Skip small volumes (< 1GB)
// Skip <1GB volumes.
if usage.Total < 1<<30 {
continue
}
// For APFS volumes, use a more precise dedup key (bytes level)
// to handle shared storage pools properly
// Use size-based dedupe key for shared pools.
volKey := fmt.Sprintf("%s:%d", part.Fstype, usage.Total)
if seenVolume[volKey] {
continue
@@ -94,7 +93,7 @@ func collectDisks() ([]DiskStatus, error) {
}

var (
// Package-level cache for external disk status
// External disk cache.
lastDiskCacheAt time.Time
diskTypeCache = make(map[string]bool)
diskCacheTTL = 2 * time.Minute
@@ -106,7 +105,7 @@ func annotateDiskTypes(disks []DiskStatus) {
}

now := time.Now()
// Clear cache if stale
// Clear stale cache.
if now.Sub(lastDiskCacheAt) > diskCacheTTL {
diskTypeCache = make(map[string]bool)
lastDiskCacheAt = now

@@ -17,7 +17,7 @@ const (
powermetricsTimeout = 2 * time.Second
)

// Pre-compiled regex patterns for GPU usage parsing
// Regex for GPU usage parsing.
var (
gpuActiveResidencyRe = regexp.MustCompile(`GPU HW active residency:\s+([\d.]+)%`)
gpuIdleResidencyRe = regexp.MustCompile(`GPU idle residency:\s+([\d.]+)%`)
@@ -25,7 +25,7 @@ var (

func (c *Collector) collectGPU(now time.Time) ([]GPUStatus, error) {
if runtime.GOOS == "darwin" {
// Get static GPU info (cached for 10 min)
// Static GPU info (cached 10 min).
if len(c.cachedGPU) == 0 || c.lastGPUAt.IsZero() || now.Sub(c.lastGPUAt) >= macGPUInfoTTL {
if gpus, err := readMacGPUInfo(); err == nil && len(gpus) > 0 {
c.cachedGPU = gpus
@@ -33,12 +33,12 @@ func (c *Collector) collectGPU(now time.Time) ([]GPUStatus, error) {
}
}

// Get real-time GPU usage
// Real-time GPU usage.
if len(c.cachedGPU) > 0 {
usage := getMacGPUUsage()
result := make([]GPUStatus, len(c.cachedGPU))
copy(result, c.cachedGPU)
// Apply usage to first GPU (Apple Silicon has one integrated GPU)
// Apply usage to first GPU (Apple Silicon).
if len(result) > 0 {
result[0].Usage = usage
}
@@ -152,19 +152,18 @@ func readMacGPUInfo() ([]GPUStatus, error) {
return gpus, nil
}

// getMacGPUUsage gets GPU active residency from powermetrics.
// Returns -1 if unavailable (e.g., not running as root).
// getMacGPUUsage reads GPU active residency from powermetrics.
func getMacGPUUsage() float64 {
ctx, cancel := context.WithTimeout(context.Background(), powermetricsTimeout)
defer cancel()

// powermetrics requires root, but we try anyway - some systems may have it enabled
// powermetrics may require root.
out, err := runCmd(ctx, "powermetrics", "--samplers", "gpu_power", "-i", "500", "-n", "1")
if err != nil {
return -1
}

// Parse "GPU HW active residency: X.XX%"
// Parse "GPU HW active residency: X.XX%".
matches := gpuActiveResidencyRe.FindStringSubmatch(out)
if len(matches) >= 2 {
usage, err := strconv.ParseFloat(matches[1], 64)
@@ -173,7 +172,7 @@ func getMacGPUUsage() float64 {
}
}

// Fallback: parse "GPU idle residency: X.XX%" and calculate active
// Fallback: parse idle residency and derive active.
matchesIdle := gpuIdleResidencyRe.FindStringSubmatch(out)
if len(matchesIdle) >= 2 {
idle, err := strconv.ParseFloat(matchesIdle[1], 64)

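The fallback presumably derives active residency as the complement of idle; a tiny sketch under that assumption:

    package main

    import "fmt"

    func main() {
        idle := 93.4 // e.g. parsed from "GPU idle residency: 93.40%"
        // Assumed derivation: active = 100 - idle.
        fmt.Printf("active: %.1f%%\n", 100.0-idle) // 6.6%
    }
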
@@ -18,19 +18,18 @@ func collectHardware(totalRAM uint64, disks []DiskStatus) HardwareInfo {
}
}

// Get model and CPU from system_profiler
// Model and CPU from system_profiler.
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()

var model, cpuModel, osVersion string

// Get hardware overview
out, err := runCmd(ctx, "system_profiler", "SPHardwareDataType")
if err == nil {
lines := strings.Split(out, "\n")
for _, line := range lines {
lower := strings.ToLower(strings.TrimSpace(line))
// Prefer "Model Name" over "Model Identifier"
// Prefer "Model Name" over "Model Identifier".
if strings.Contains(lower, "model name:") {
parts := strings.Split(line, ":")
if len(parts) == 2 {
@@ -52,7 +51,6 @@ func collectHardware(totalRAM uint64, disks []DiskStatus) HardwareInfo {
}
}

// Get macOS version
ctx2, cancel2 := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel2()
out2, err := runCmd(ctx2, "sw_vers", "-productVersion")
@@ -60,7 +58,6 @@ func collectHardware(totalRAM uint64, disks []DiskStatus) HardwareInfo {
osVersion = "macOS " + strings.TrimSpace(out2)
}

// Get disk size
diskSize := "Unknown"
if len(disks) > 0 {
diskSize = humanBytes(disks[0].Total)

@@ -5,45 +5,43 @@ import (
"strings"
)

// Health score calculation weights and thresholds
// Health score weights and thresholds.
const (
// Weights (must sum to ~100 for total score)
// Weights.
healthCPUWeight = 30.0
healthMemWeight = 25.0
healthDiskWeight = 20.0
healthThermalWeight = 15.0
healthIOWeight = 10.0

// CPU thresholds
// CPU.
cpuNormalThreshold = 30.0
cpuHighThreshold = 70.0

// Memory thresholds
// Memory.
memNormalThreshold = 50.0
memHighThreshold = 80.0
memPressureWarnPenalty = 5.0
memPressureCritPenalty = 15.0

// Disk thresholds
// Disk.
diskWarnThreshold = 70.0
diskCritThreshold = 90.0

// Thermal thresholds
// Thermal.
thermalNormalThreshold = 60.0
thermalHighThreshold = 85.0

// Disk IO thresholds (MB/s)
// Disk IO (MB/s).
ioNormalThreshold = 50.0
ioHighThreshold = 150.0
)

func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, diskIO DiskIOStatus, thermal ThermalStatus) (int, string) {
// Start with perfect score
score := 100.0
issues := []string{}

// CPU Usage (30% weight) - deduct up to 30 points
// 0-30% CPU = 0 deduction, 30-70% = linear, 70-100% = heavy penalty
// CPU penalty.
cpuPenalty := 0.0
if cpu.Usage > cpuNormalThreshold {
if cpu.Usage > cpuHighThreshold {
@@ -57,8 +55,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
issues = append(issues, "High CPU")
}

// Memory Usage (25% weight) - deduct up to 25 points
// 0-50% = 0 deduction, 50-80% = linear, 80-100% = heavy penalty
// Memory penalty.
memPenalty := 0.0
if mem.UsedPercent > memNormalThreshold {
if mem.UsedPercent > memHighThreshold {
@@ -72,7 +69,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
issues = append(issues, "High Memory")
}

// Memory Pressure (extra penalty)
// Memory pressure penalty.
if mem.Pressure == "warn" {
score -= memPressureWarnPenalty
issues = append(issues, "Memory Pressure")
@@ -81,7 +78,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
issues = append(issues, "Critical Memory")
}

// Disk Usage (20% weight) - deduct up to 20 points
// Disk penalty.
diskPenalty := 0.0
if len(disks) > 0 {
diskUsage := disks[0].UsedPercent
@@ -98,7 +95,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
}
}

// Thermal (15% weight) - deduct up to 15 points
// Thermal penalty.
thermalPenalty := 0.0
if thermal.CPUTemp > 0 {
if thermal.CPUTemp > thermalNormalThreshold {
@@ -112,7 +109,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
score -= thermalPenalty
}

// Disk IO (10% weight) - deduct up to 10 points
// Disk IO penalty.
ioPenalty := 0.0
totalIO := diskIO.ReadRate + diskIO.WriteRate
if totalIO > ioNormalThreshold {
@@ -125,7 +122,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
}
score -= ioPenalty

// Ensure score is in valid range
// Clamp score.
if score < 0 {
score = 0
}
@@ -133,7 +130,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
score = 100
}

// Generate message
// Build message.
msg := "Excellent"
if score >= 90 {
msg = "Excellent"

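The exact penalty curves are elided by this diff; below is a hypothetical shape consistent with the comments (no deduction up to the normal threshold, a linear ramp to the high threshold, a heavier slope beyond it, then a clamp to 0-100). The split of each weight across the two segments is an assumption:

    package main

    import "fmt"

    func penalty(usage, normal, high, weight float64) float64 {
        switch {
        case usage <= normal:
            return 0
        case usage <= high:
            // Linear ramp over the first half of the weight.
            return (usage - normal) / (high - normal) * (weight / 2)
        default:
            // Heavier penalty over the remaining half above the high mark.
            return weight/2 + (usage-high)/(100-high)*(weight/2)
        }
    }

    func main() {
        score := 100.0
        score -= penalty(85, 30, 70, 30) // CPU at 85%
        score -= penalty(60, 50, 80, 25) // memory at 60%
        if score < 0 {
            score = 0
        }
        if score > 100 {
            score = 100
        }
        fmt.Printf("health: %d\n", int(score))
    }
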
@@ -17,7 +17,7 @@ func (c *Collector) collectNetwork(now time.Time) ([]NetworkStatus, error) {
return nil, err
}

// Get IP addresses for interfaces
// Map interface IPs.
ifAddrs := getInterfaceIPs()

if c.lastNetAt.IsZero() {
@@ -81,7 +81,7 @@ func getInterfaceIPs() map[string]string {
}
for _, iface := range ifaces {
for _, addr := range iface.Addrs {
// Only IPv4
// IPv4 only.
if strings.Contains(addr.Addr, ".") && !strings.HasPrefix(addr.Addr, "127.") {
ip := strings.Split(addr.Addr, "/")[0]
result[iface.Name] = ip
@@ -104,14 +104,14 @@ func isNoiseInterface(name string) bool {
}

func collectProxy() ProxyStatus {
// Check environment variables first
// Check environment variables first.
for _, env := range []string{"https_proxy", "HTTPS_PROXY", "http_proxy", "HTTP_PROXY"} {
if val := os.Getenv(env); val != "" {
proxyType := "HTTP"
if strings.HasPrefix(val, "socks") {
proxyType = "SOCKS"
}
// Extract host
// Extract host.
host := val
if strings.Contains(host, "://") {
host = strings.SplitN(host, "://", 2)[1]
@@ -123,7 +123,7 @@ func collectProxy() ProxyStatus {
}
}

// macOS: check system proxy via scutil
// macOS: check system proxy via scutil.
if runtime.GOOS == "darwin" {
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()

@@ -15,7 +15,7 @@ func collectTopProcesses() []ProcessInfo {
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()

// Use ps to get top processes by CPU
// Use ps to get top processes by CPU.
out, err := runCmd(ctx, "ps", "-Aceo", "pcpu,pmem,comm", "-r")
if err != nil {
return nil
@@ -24,10 +24,10 @@ func collectTopProcesses() []ProcessInfo {
lines := strings.Split(strings.TrimSpace(out), "\n")
var procs []ProcessInfo
for i, line := range lines {
if i == 0 { // skip header
if i == 0 {
continue
}
if i > 5 { // top 5
if i > 5 {
break
}
fields := strings.Fields(line)
@@ -37,7 +37,7 @@ func collectTopProcesses() []ProcessInfo {
cpuVal, _ := strconv.ParseFloat(fields[0], 64)
memVal, _ := strconv.ParseFloat(fields[1], 64)
name := fields[len(fields)-1]
// Get just the process name without path
// Strip path from command name.
if idx := strings.LastIndex(name, "/"); idx >= 0 {
name = name[idx+1:]
}

@@ -33,7 +33,7 @@ const (
iconProcs = "❊"
)

// Check if it's Christmas season (Dec 10-31)
// isChristmasSeason reports Dec 10-31.
func isChristmasSeason() bool {
now := time.Now()
month := now.Month()
@@ -41,7 +41,7 @@ func isChristmasSeason() bool {
return month == time.December && day >= 10 && day <= 31
}

// Mole body frames (legs animate)
// Mole body frames.
var moleBody = [][]string{
{
` /\_/\`,
@@ -69,7 +69,7 @@ var moleBody = [][]string{
},
}

// Mole body frames with Christmas hat
// Mole body frames with Christmas hat.
var moleBodyWithHat = [][]string{
{
` *`,
@@ -105,7 +105,7 @@ var moleBodyWithHat = [][]string{
},
}

// Generate frames with horizontal movement
// getMoleFrame renders the animated mole.
func getMoleFrame(animFrame int, termWidth int) string {
var body []string
var bodyIdx int
@@ -119,15 +119,12 @@ func getMoleFrame(animFrame int, termWidth int) string {
body = moleBody[bodyIdx]
}

// Calculate mole width (approximate)
moleWidth := 15
// Move across terminal width
maxPos := termWidth - moleWidth
if maxPos < 0 {
maxPos = 0
}

// Move position: 0 -> maxPos -> 0
cycleLength := maxPos * 2
if cycleLength == 0 {
cycleLength = 1
@@ -141,7 +138,6 @@ func getMoleFrame(animFrame int, termWidth int) string {
var lines []string

if isChristmas {
// Render with red hat on first 3 lines
for i, line := range body {
if i < 3 {
lines = append(lines, padding+hatStyle.Render(line))
@@ -165,27 +161,24 @@ type cardData struct {
}

func renderHeader(m MetricsSnapshot, errMsg string, animFrame int, termWidth int) string {
// Title
title := titleStyle.Render("Mole Status")

// Health Score
scoreStyle := getScoreStyle(m.HealthScore)
scoreText := subtleStyle.Render("Health ") + scoreStyle.Render(fmt.Sprintf("● %d", m.HealthScore))

// Hardware info - compact for single line
// Hardware info for a single line.
infoParts := []string{}
if m.Hardware.Model != "" {
infoParts = append(infoParts, primaryStyle.Render(m.Hardware.Model))
}
if m.Hardware.CPUModel != "" {
cpuInfo := m.Hardware.CPUModel
// Add GPU core count if available (compact format)
// Append GPU core count when available.
if len(m.GPU) > 0 && m.GPU[0].CoreCount > 0 {
cpuInfo += fmt.Sprintf(" (%dGPU)", m.GPU[0].CoreCount)
}
infoParts = append(infoParts, cpuInfo)
}
// Combine RAM and Disk to save space
var specs []string
if m.Hardware.TotalRAM != "" {
specs = append(specs, m.Hardware.TotalRAM)
@@ -200,10 +193,8 @@ func renderHeader(m MetricsSnapshot, errMsg string, animFrame int, termWidth int
infoParts = append(infoParts, m.Hardware.OSVersion)
}

// Single line compact header
headerLine := title + " " + scoreText + " " + subtleStyle.Render(strings.Join(infoParts, " · "))

// Running mole animation
mole := getMoleFrame(animFrame, termWidth)

if errMsg != "" {
@@ -214,19 +205,14 @@ func renderHeader(m MetricsSnapshot, errMsg string, animFrame int, termWidth int

func getScoreStyle(score int) lipgloss.Style {
if score >= 90 {
// Excellent - Bright Green
return lipgloss.NewStyle().Foreground(lipgloss.Color("#87FF87")).Bold(true)
} else if score >= 75 {
// Good - Green
return lipgloss.NewStyle().Foreground(lipgloss.Color("#87D787")).Bold(true)
} else if score >= 60 {
// Fair - Yellow
return lipgloss.NewStyle().Foreground(lipgloss.Color("#FFD75F")).Bold(true)
} else if score >= 40 {
// Poor - Orange
return lipgloss.NewStyle().Foreground(lipgloss.Color("#FFAF5F")).Bold(true)
} else {
// Critical - Red
return lipgloss.NewStyle().Foreground(lipgloss.Color("#FF6B6B")).Bold(true)
}
}
@@ -240,7 +226,6 @@ func buildCards(m MetricsSnapshot, _ int) []cardData {
renderProcessCard(m.TopProcesses),
renderNetworkCard(m.Network, m.Proxy),
}
// Only show sensors if we have valid temperature readings
if hasSensorData(m.Sensors) {
cards = append(cards, renderSensorsCard(m.Sensors))
}
@@ -334,7 +319,7 @@ func renderMemoryCard(mem MemoryStatus) cardData {
} else {
lines = append(lines, fmt.Sprintf("Swap %s", subtleStyle.Render("not in use")))
}
// Memory pressure
// Memory pressure status.
if mem.Pressure != "" {
pressureStyle := okStyle
pressureText := "Status " + mem.Pressure
@@ -405,7 +390,6 @@ func formatDiskLine(label string, d DiskStatus) string {
}

func ioBar(rate float64) string {
// Scale: 0-50 MB/s maps to 0-5 blocks
filled := int(rate / 10.0)
if filled > 5 {
filled = 5
@@ -441,7 +425,7 @@ func renderProcessCard(procs []ProcessInfo) cardData {
}

func miniBar(percent float64) string {
filled := int(percent / 20) // 5 chars max for 100%
filled := int(percent / 20)
if filled > 5 {
filled = 5
}
@@ -471,7 +455,7 @@ func renderNetworkCard(netStats []NetworkStatus, proxy ProxyStatus) cardData {
txBar := netBar(totalTx)
lines = append(lines, fmt.Sprintf("Down %s %s", rxBar, formatRate(totalRx)))
lines = append(lines, fmt.Sprintf("Up %s %s", txBar, formatRate(totalTx)))
// Show proxy and IP in one line
// Show proxy and IP on one line.
var infoParts []string
if proxy.Enabled {
infoParts = append(infoParts, "Proxy "+proxy.Type)
@@ -487,7 +471,6 @@ func renderNetworkCard(netStats []NetworkStatus, proxy ProxyStatus) cardData {
}

func netBar(rate float64) string {
// Scale: 0-10 MB/s maps to 0-5 blocks
filled := int(rate / 2.0)
if filled > 5 {
filled = 5
@@ -511,8 +494,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
lines = append(lines, subtleStyle.Render("No battery"))
} else {
b := batts[0]
// Line 1: label + bar + percentage (consistent with other cards)
// Only show red when battery is critically low
statusLower := strings.ToLower(b.Status)
percentText := fmt.Sprintf("%5.1f%%", b.Percent)
if b.Percent < 20 && statusLower != "charging" && statusLower != "charged" {
@@ -520,7 +501,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
}
lines = append(lines, fmt.Sprintf("Level %s %s", batteryProgressBar(b.Percent), percentText))

// Line 2: status with power info
statusIcon := ""
statusStyle := subtleStyle
if statusLower == "charging" || statusLower == "charged" {
@@ -529,7 +509,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
} else if b.Percent < 20 {
statusStyle = dangerStyle
}
// Capitalize first letter
statusText := b.Status
if len(statusText) > 0 {
statusText = strings.ToUpper(statusText[:1]) + strings.ToLower(statusText[1:])
@@ -537,21 +516,18 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
if b.TimeLeft != "" {
statusText += " · " + b.TimeLeft
}
// Add power information
// Add power info.
if statusLower == "charging" || statusLower == "charged" {
// AC powered - show system power consumption
if thermal.SystemPower > 0 {
statusText += fmt.Sprintf(" · %.0fW", thermal.SystemPower)
} else if thermal.AdapterPower > 0 {
statusText += fmt.Sprintf(" · %.0fW Adapter", thermal.AdapterPower)
}
} else if thermal.BatteryPower > 0 {
// Battery powered - show discharge rate
statusText += fmt.Sprintf(" · %.0fW", thermal.BatteryPower)
}
lines = append(lines, statusStyle.Render(statusText+statusIcon))

// Line 3: Health + cycles + temp
healthParts := []string{}
if b.Health != "" {
healthParts = append(healthParts, b.Health)
@@ -560,7 +536,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
healthParts = append(healthParts, fmt.Sprintf("%d cycles", b.CycleCount))
}

// Add temperature if available
if thermal.CPUTemp > 0 {
tempStyle := subtleStyle
if thermal.CPUTemp > 80 {
@@ -571,7 +546,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
healthParts = append(healthParts, tempStyle.Render(fmt.Sprintf("%.0f°C", thermal.CPUTemp)))
}

// Add fan speed if available
if thermal.FanSpeed > 0 {
healthParts = append(healthParts, fmt.Sprintf("%d RPM", thermal.FanSpeed))
}
@@ -607,7 +581,6 @@ func renderCard(data cardData, width int, height int) string {
header := titleStyle.Render(titleText) + " " + lineStyle.Render(strings.Repeat("╌", lineLen))
content := header + "\n" + strings.Join(data.lines, "\n")

// Pad to target height
lines := strings.Split(content, "\n")
for len(lines) < height {
lines = append(lines, "")
@@ -780,7 +753,6 @@ func renderTwoColumns(cards []cardData, width int) string {
}
}

// Add empty lines between rows for separation
var spacedRows []string
for i, r := range rows {
if i > 0 {

81
install.sh
@@ -1,16 +1,16 @@
#!/bin/bash
# Mole Installation Script
# Mole - Installer for manual installs.
# Fetches source/binaries and installs to prefix.
# Supports update and edge installs.

set -euo pipefail

# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'

# Simple spinner
_SPINNER_PID=""
start_line_spinner() {
local msg="$1"
@@ -36,17 +36,15 @@ stop_line_spinner() { if [[ -n "$_SPINNER_PID" ]]; then
printf "\r\033[K"
fi; }

# Verbosity (0 = quiet, 1 = verbose)
VERBOSE=1

# Icons (duplicated from lib/core/common.sh - necessary as install.sh runs standalone)
# Note: Don't use 'readonly' here to avoid conflicts when sourcing common.sh later
# Icons duplicated from lib/core/common.sh (install.sh runs standalone).
# Avoid readonly to prevent conflicts when sourcing common.sh later.
ICON_SUCCESS="✓"
ICON_ADMIN="●"
ICON_CONFIRM="◎"
ICON_ERROR="☻"

# Logging functions
log_info() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${BLUE}$1${NC}"; }
log_success() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${GREEN}${ICON_SUCCESS}${NC} $1"; }
log_warning() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${YELLOW}$1${NC}"; }
@@ -54,21 +52,18 @@ log_error() { echo -e "${YELLOW}${ICON_ERROR}${NC} $1"; }
log_admin() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${BLUE}${ICON_ADMIN}${NC} $1"; }
log_confirm() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${BLUE}${ICON_CONFIRM}${NC} $1"; }

# Default installation directory
# Install defaults
INSTALL_DIR="/usr/local/bin"
CONFIG_DIR="$HOME/.config/mole"
SOURCE_DIR=""

# Default action (install|update)
ACTION="install"

# Check if we need sudo for install directory operations
# Resolve source dir (local checkout, env override, or download).
needs_sudo() {
[[ ! -w "$INSTALL_DIR" ]]
}

# Execute command with sudo if needed
# Usage: maybe_sudo cp source dest
maybe_sudo() {
if needs_sudo; then
sudo "$@"
@@ -77,13 +72,11 @@ maybe_sudo() {
fi
}

# Resolve the directory containing source files (supports curl | bash)
resolve_source_dir() {
if [[ -n "$SOURCE_DIR" && -d "$SOURCE_DIR" && -f "$SOURCE_DIR/mole" ]]; then
return 0
fi

# 1) If script is on disk, use its directory (only when mole executable present)
if [[ -n "${BASH_SOURCE[0]:-}" && -f "${BASH_SOURCE[0]}" ]]; then
local script_dir
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
@@ -93,16 +86,13 @@ resolve_source_dir() {
fi
fi

# 2) If CLEAN_SOURCE_DIR env is provided, honor it
if [[ -n "${CLEAN_SOURCE_DIR:-}" && -d "$CLEAN_SOURCE_DIR" && -f "$CLEAN_SOURCE_DIR/mole" ]]; then
SOURCE_DIR="$CLEAN_SOURCE_DIR"
return 0
fi

# 3) Fallback: fetch repository to a temp directory (works for curl | bash)
local tmp
tmp="$(mktemp -d)"
# Expand tmp now so trap doesn't depend on local scope
trap "stop_line_spinner 2>/dev/null; rm -rf '$tmp'" EXIT

local branch="${MOLE_VERSION:-}"
@@ -120,7 +110,6 @@ resolve_source_dir() {
fi
local url="https://github.com/tw93/mole/archive/refs/heads/main.tar.gz"

# If a specific version is requested (e.g. V1.0.0), use the tag URL
if [[ "$branch" != "main" ]]; then
url="https://github.com/tw93/mole/archive/refs/tags/${branch}.tar.gz"
fi
@@ -131,8 +120,6 @@ resolve_source_dir() {
if tar -xzf "$tmp/mole.tar.gz" -C "$tmp" 2> /dev/null; then
stop_line_spinner

# Find the extracted directory (name varies by tag/branch)
# It usually looks like Mole-main, mole-main, Mole-1.0.0, etc.
local extracted_dir
extracted_dir=$(find "$tmp" -mindepth 1 -maxdepth 1 -type d | head -n 1)

@@ -170,6 +157,7 @@ resolve_source_dir() {
exit 1
}

# Version helpers
get_source_version() {
local source_mole="$SOURCE_DIR/mole"
if [[ -f "$source_mole" ]]; then
@@ -188,7 +176,6 @@ get_latest_release_tag() {
if [[ -z "$tag" ]]; then
return 1
fi
# Return tag as-is; normalize_release_tag will handle standardization
printf '%s\n' "$tag"
}

@@ -205,7 +192,6 @@ get_latest_release_tag_from_git() {

normalize_release_tag() {
local tag="$1"
# Remove all leading 'v' or 'V' prefixes (handle edge cases like VV1.0.0)
while [[ "$tag" =~ ^[vV] ]]; do
tag="${tag#v}"
tag="${tag#V}"
@@ -218,21 +204,18 @@ normalize_release_tag() {
get_installed_version() {
local binary="$INSTALL_DIR/mole"
if [[ -x "$binary" ]]; then
# Try running the binary first (preferred method)
local version
version=$("$binary" --version 2> /dev/null | awk '/Mole version/ {print $NF; exit}')
if [[ -n "$version" ]]; then
echo "$version"
else
# Fallback: parse VERSION from file (in case binary is broken)
sed -n 's/^VERSION="\(.*\)"$/\1/p' "$binary" | head -n1
fi
fi
}

# Parse command line arguments
# CLI parsing (supports main/latest and version tokens).
parse_args() {
# Handle positional version selector in any position
local -a args=("$@")
local version_token=""
local i
@@ -248,14 +231,12 @@ parse_args() {
fi
case "$token" in
latest | main)
# Install from main branch (edge/beta)
export MOLE_VERSION="main"
export MOLE_EDGE_INSTALL="true"
version_token="$token"
unset 'args[$i]'
;;
[0-9]* | V[0-9]* | v[0-9]*)
# Install specific version (e.g., 1.16.0, V1.16.0)
export MOLE_VERSION="$token"
version_token="$token"
unset 'args[$i]'
@@ -266,7 +247,6 @@ parse_args() {
;;
esac
done
# Use ${args[@]+...} pattern to safely handle sparse/empty arrays with set -u
if [[ ${#args[@]} -gt 0 ]]; then
set -- ${args[@]+"${args[@]}"}
else
@@ -311,17 +291,14 @@ parse_args() {
done
}

# Check system requirements
# Environment checks and directory setup
check_requirements() {
# Check if running on macOS
if [[ "$OSTYPE" != "darwin"* ]]; then
log_error "This tool is designed for macOS only"
exit 1
fi

# Check if already installed via Homebrew
if command -v brew > /dev/null 2>&1 && brew list mole > /dev/null 2>&1; then
# Verify that mole executable actually exists and is from Homebrew
local mole_path
mole_path=$(command -v mole 2> /dev/null || true)
local is_homebrew_binary=false
@@ -332,7 +309,6 @@ check_requirements() {
fi
fi

# Only block installation if Homebrew binary actually exists
if [[ "$is_homebrew_binary" == "true" ]]; then
if [[ "$ACTION" == "update" ]]; then
return 0
@@ -346,27 +322,22 @@ check_requirements() {
echo ""
exit 1
else
# Brew has mole in database but binary doesn't exist - clean up
log_warning "Cleaning up stale Homebrew installation..."
brew uninstall --force mole > /dev/null 2>&1 || true
fi
fi

# Check if install directory exists and is writable
if [[ ! -d "$(dirname "$INSTALL_DIR")" ]]; then
log_error "Parent directory $(dirname "$INSTALL_DIR") does not exist"
exit 1
fi
}

# Create installation directories
create_directories() {
# Create install directory if it doesn't exist
if [[ ! -d "$INSTALL_DIR" ]]; then
maybe_sudo mkdir -p "$INSTALL_DIR"
fi

# Create config directory
if ! mkdir -p "$CONFIG_DIR" "$CONFIG_DIR/bin" "$CONFIG_DIR/lib"; then
log_error "Failed to create config directory: $CONFIG_DIR"
exit 1
@@ -374,7 +345,7 @@ create_directories() {

}

# Build binary locally from source when download isn't available
# Binary install helpers
build_binary_from_source() {
local binary_name="$1"
local target_path="$2"
@@ -418,7 +389,6 @@ build_binary_from_source() {
return 1
}

# Download binary from release
download_binary() {
local binary_name="$1"
local target_path="$CONFIG_DIR/bin/${binary_name}-go"
@@ -429,8 +399,6 @@ download_binary() {
arch_suffix="arm64"
fi

# Try to use local binary first (from build or source)
# Check for both standard name and cross-compiled name
if [[ -f "$SOURCE_DIR/bin/${binary_name}-go" ]]; then
cp "$SOURCE_DIR/bin/${binary_name}-go" "$target_path"
chmod +x "$target_path"
@@ -443,7 +411,6 @@ download_binary() {
return 0
fi

# Fallback to download
local version
version=$(get_source_version)
if [[ -z "$version" ]]; then
@@ -455,9 +422,7 @@ download_binary() {
fi
local url="https://github.com/tw93/mole/releases/download/V${version}/${binary_name}-darwin-${arch_suffix}"

# Only attempt download if we have internet
# Note: Skip network check and let curl download handle connectivity issues
# This avoids false negatives from strict 2-second timeout
# Skip preflight network checks to avoid false negatives.

if [[ -t 1 ]]; then
start_line_spinner "Downloading ${binary_name}..."
@@ -480,7 +445,7 @@ download_binary() {
fi
}

# Install files
# File installation (bin/lib/scripts + go helpers).
install_files() {

resolve_source_dir
@@ -492,7 +457,6 @@ install_files() {
install_dir_abs="$(cd "$INSTALL_DIR" && pwd)"
config_dir_abs="$(cd "$CONFIG_DIR" && pwd)"

# Copy main executable when destination differs
if [[ -f "$SOURCE_DIR/mole" ]]; then
if [[ "$source_dir_abs" != "$install_dir_abs" ]]; then
if needs_sudo; then
@@ -507,7 +471,6 @@ install_files() {
exit 1
fi

# Install mo alias for Mole if available
if [[ -f "$SOURCE_DIR/mo" ]]; then
if [[ "$source_dir_abs" == "$install_dir_abs" ]]; then
log_success "mo alias already present"
@@ -518,7 +481,6 @@ install_files() {
fi
fi

# Copy configuration and modules
if [[ -d "$SOURCE_DIR/bin" ]]; then
local source_bin_abs="$(cd "$SOURCE_DIR/bin" && pwd)"
local config_bin_abs="$(cd "$CONFIG_DIR/bin" && pwd)"
@@ -550,7 +512,6 @@ install_files() {
fi
fi

# Copy other files if they exist and directories differ
if [[ "$config_dir_abs" != "$source_dir_abs" ]]; then
for file in README.md LICENSE install.sh; do
if [[ -f "$SOURCE_DIR/$file" ]]; then
@@ -563,12 +524,10 @@ install_files() {
chmod +x "$CONFIG_DIR/install.sh"
fi

# Update the mole script to use the config directory when installed elsewhere
if [[ "$source_dir_abs" != "$install_dir_abs" ]]; then
maybe_sudo sed -i '' "s|SCRIPT_DIR=.*|SCRIPT_DIR=\"$CONFIG_DIR\"|" "$INSTALL_DIR/mole"
fi

# Install/Download Go binaries
if ! download_binary "analyze"; then
exit 1
fi
@@ -577,12 +536,11 @@ install_files() {
fi
}

# Verify installation
# Verification and PATH hint
verify_installation() {

if [[ -x "$INSTALL_DIR/mole" ]] && [[ -f "$CONFIG_DIR/lib/core/common.sh" ]]; then

# Test if mole command works
if "$INSTALL_DIR/mole" --help > /dev/null 2>&1; then
return 0
else
@@ -594,14 +552,11 @@ verify_installation() {
fi
}

# Add to PATH if needed
setup_path() {
# Check if install directory is in PATH
if [[ ":$PATH:" == *":$INSTALL_DIR:"* ]]; then
return
fi

# Only suggest PATH setup for custom directories
if [[ "$INSTALL_DIR" != "/usr/local/bin" ]]; then
log_warning "$INSTALL_DIR is not in your PATH"
echo ""
@@ -659,7 +614,7 @@ print_usage_summary() {
echo ""
}

# Main installation function
# Main install/update flows
perform_install() {
resolve_source_dir
local source_version
@@ -678,7 +633,7 @@ perform_install() {
installed_version="$source_version"
fi

# Add edge indicator for main branch installs
# Edge installs get a suffix to make the version explicit.
if [[ "${MOLE_EDGE_INSTALL:-}" == "true" ]]; then
installed_version="${installed_version}-edge"
echo ""
@@ -693,7 +648,6 @@ perform_update() {
check_requirements

if command -v brew > /dev/null 2>&1 && brew list mole > /dev/null 2>&1; then
# Try to use shared function if available (when running from installed Mole)
resolve_source_dir 2> /dev/null || true
local current_version
current_version=$(get_installed_version || echo "unknown")
@@ -702,7 +656,6 @@ perform_update() {
source "$SOURCE_DIR/lib/core/common.sh"
update_via_homebrew "$current_version"
else
# No common.sh available - provide helpful instructions
log_error "Cannot update Homebrew-managed Mole without full installation"
echo ""
echo "Please update via Homebrew:"
@@ -735,7 +688,6 @@ perform_update() {
exit 0
fi

# Update with minimal output (suppress info/success, show errors only)
local old_verbose=$VERBOSE
VERBOSE=0
create_directories || {
@@ -766,7 +718,6 @@ perform_update() {
echo -e "${GREEN}${ICON_SUCCESS}${NC} Updated to latest version ($updated_version)"
}

# Run requested action
parse_args "$@"

case "$ACTION" in

@@ -1,22 +1,19 @@
#!/bin/bash
# User GUI Applications Cleanup Module
# Desktop applications, communication tools, media players, games, utilities
# User GUI Applications Cleanup Module (desktop apps, media, utilities).
set -euo pipefail
# Clean Xcode and iOS development tools
# Xcode and iOS tooling.
clean_xcode_tools() {
# Check if Xcode is running for safer cleanup of critical resources
# Skip DerivedData/Archives while Xcode is running.
local xcode_running=false
if pgrep -x "Xcode" > /dev/null 2>&1; then
xcode_running=true
fi
# Safe to clean regardless of Xcode state
safe_clean ~/Library/Developer/CoreSimulator/Caches/* "Simulator cache"
safe_clean ~/Library/Developer/CoreSimulator/Devices/*/data/tmp/* "Simulator temp files"
safe_clean ~/Library/Caches/com.apple.dt.Xcode/* "Xcode cache"
safe_clean ~/Library/Developer/Xcode/iOS\ Device\ Logs/* "iOS device logs"
safe_clean ~/Library/Developer/Xcode/watchOS\ Device\ Logs/* "watchOS device logs"
safe_clean ~/Library/Developer/Xcode/Products/* "Xcode build products"
# Clean build artifacts only if Xcode is not running
if [[ "$xcode_running" == "false" ]]; then
safe_clean ~/Library/Developer/Xcode/DerivedData/* "Xcode derived data"
safe_clean ~/Library/Developer/Xcode/Archives/* "Xcode archives"
@@ -24,7 +21,7 @@ clean_xcode_tools() {
echo -e " ${YELLOW}${ICON_WARNING}${NC} Xcode is running, skipping DerivedData and Archives cleanup"
fi
}
# Clean code editors (VS Code, Sublime, etc.)
# Code editors.
clean_code_editors() {
safe_clean ~/Library/Application\ Support/Code/logs/* "VS Code logs"
safe_clean ~/Library/Application\ Support/Code/Cache/* "VS Code cache"
@@ -32,7 +29,7 @@ clean_code_editors() {
safe_clean ~/Library/Application\ Support/Code/CachedData/* "VS Code data cache"
safe_clean ~/Library/Caches/com.sublimetext.*/* "Sublime Text cache"
}
# Clean communication apps (Slack, Discord, Zoom, etc.)
# Communication apps.
clean_communication_apps() {
safe_clean ~/Library/Application\ Support/discord/Cache/* "Discord cache"
safe_clean ~/Library/Application\ Support/legcord/Cache/* "Legcord cache"
@@ -47,43 +44,43 @@ clean_communication_apps() {
safe_clean ~/Library/Caches/com.tencent.WeWorkMac/* "WeCom cache"
safe_clean ~/Library/Caches/com.feishu.*/* "Feishu cache"
}
# Clean DingTalk
# DingTalk.
clean_dingtalk() {
safe_clean ~/Library/Caches/dd.work.exclusive4aliding/* "DingTalk iDingTalk cache"
safe_clean ~/Library/Caches/com.alibaba.AliLang.osx/* "AliLang security component"
safe_clean ~/Library/Application\ Support/iDingTalk/log/* "DingTalk logs"
safe_clean ~/Library/Application\ Support/iDingTalk/holmeslogs/* "DingTalk holmes logs"
}
# Clean AI assistants
# AI assistants.
clean_ai_apps() {
safe_clean ~/Library/Caches/com.openai.chat/* "ChatGPT cache"
safe_clean ~/Library/Caches/com.anthropic.claudefordesktop/* "Claude desktop cache"
safe_clean ~/Library/Logs/Claude/* "Claude logs"
}
# Clean design and creative tools
# Design and creative tools.
clean_design_tools() {
safe_clean ~/Library/Caches/com.bohemiancoding.sketch3/* "Sketch cache"
safe_clean ~/Library/Application\ Support/com.bohemiancoding.sketch3/cache/* "Sketch app cache"
safe_clean ~/Library/Caches/Adobe/* "Adobe cache"
safe_clean ~/Library/Caches/com.adobe.*/* "Adobe app caches"
safe_clean ~/Library/Caches/com.figma.Desktop/* "Figma cache"
# Note: Raycast cache is protected - contains clipboard history (including images)
# Raycast cache is protected (clipboard history, images).
}
# Clean video editing tools
# Video editing tools.
clean_video_tools() {
safe_clean ~/Library/Caches/net.telestream.screenflow10/* "ScreenFlow cache"
safe_clean ~/Library/Caches/com.apple.FinalCut/* "Final Cut Pro cache"
safe_clean ~/Library/Caches/com.blackmagic-design.DaVinciResolve/* "DaVinci Resolve cache"
safe_clean ~/Library/Caches/com.adobe.PremierePro.*/* "Premiere Pro cache"
}
# Clean 3D and CAD tools
# 3D and CAD tools.
clean_3d_tools() {
safe_clean ~/Library/Caches/org.blenderfoundation.blender/* "Blender cache"
safe_clean ~/Library/Caches/com.maxon.cinema4d/* "Cinema 4D cache"
safe_clean ~/Library/Caches/com.autodesk.*/* "Autodesk cache"
safe_clean ~/Library/Caches/com.sketchup.*/* "SketchUp cache"
}
# Clean productivity apps
# Productivity apps.
clean_productivity_apps() {
safe_clean ~/Library/Caches/com.tw93.MiaoYan/* "MiaoYan cache"
safe_clean ~/Library/Caches/com.klee.desktop/* "Klee cache"
@@ -92,20 +89,18 @@ clean_productivity_apps() {
safe_clean ~/Library/Caches/com.filo.client/* "Filo cache"
safe_clean ~/Library/Caches/com.flomoapp.mac/* "Flomo cache"
}
# Clean music and media players (protects Spotify offline music)
# Music/media players (protect Spotify offline music).
clean_media_players() {
# Spotify cache protection: check for offline music indicators
local spotify_cache="$HOME/Library/Caches/com.spotify.client"
local spotify_data="$HOME/Library/Application Support/Spotify"
local has_offline_music=false
# Check for offline music database or large cache (>500MB)
# Heuristics: offline DB or large cache.
if [[ -f "$spotify_data/PersistentCache/Storage/offline.bnk" ]] ||
[[ -d "$spotify_data/PersistentCache/Storage" && -n "$(find "$spotify_data/PersistentCache/Storage" -type f -name "*.file" 2> /dev/null | head -1)" ]]; then
has_offline_music=true
elif [[ -d "$spotify_cache" ]]; then
local cache_size_kb
cache_size_kb=$(get_path_size_kb "$spotify_cache")
# Large cache (>500MB) likely contains offline music
if [[ $cache_size_kb -ge 512000 ]]; then
has_offline_music=true
fi
@@ -125,7 +120,7 @@ clean_media_players() {
safe_clean ~/Library/Caches/com.kugou.mac/* "Kugou Music cache"
safe_clean ~/Library/Caches/com.kuwo.mac/* "Kuwo Music cache"
}
# Clean video players
# Video players.
clean_video_players() {
safe_clean ~/Library/Caches/com.colliderli.iina "IINA cache"
safe_clean ~/Library/Caches/org.videolan.vlc "VLC cache"
@@ -136,7 +131,7 @@ clean_video_players() {
safe_clean ~/Library/Caches/com.douyu.*/* "Douyu cache"
safe_clean ~/Library/Caches/com.huya.*/* "Huya cache"
}
# Clean download managers
# Download managers.
clean_download_managers() {
safe_clean ~/Library/Caches/net.xmac.aria2gui "Aria2 cache"
safe_clean ~/Library/Caches/org.m0k.transmission "Transmission cache"
@@ -145,7 +140,7 @@ clean_download_managers() {
safe_clean ~/Library/Caches/com.folx.*/* "Folx cache"
safe_clean ~/Library/Caches/com.charlessoft.pacifist/* "Pacifist cache"
}
# Clean gaming platforms
# Gaming platforms.
clean_gaming_platforms() {
safe_clean ~/Library/Caches/com.valvesoftware.steam/* "Steam cache"
safe_clean ~/Library/Application\ Support/Steam/htmlcache/* "Steam web cache"
@@ -156,41 +151,41 @@ clean_gaming_platforms() {
safe_clean ~/Library/Caches/com.gog.galaxy/* "GOG Galaxy cache"
safe_clean ~/Library/Caches/com.riotgames.*/* "Riot Games cache"
}
# Clean translation and dictionary apps
# Translation/dictionary apps.
clean_translation_apps() {
safe_clean ~/Library/Caches/com.youdao.YoudaoDict "Youdao Dictionary cache"
safe_clean ~/Library/Caches/com.eudic.* "Eudict cache"
safe_clean ~/Library/Caches/com.bob-build.Bob "Bob Translation cache"
}
# Clean screenshot and screen recording tools
# Screenshot/recording tools.
clean_screenshot_tools() {
safe_clean ~/Library/Caches/com.cleanshot.* "CleanShot cache"
safe_clean ~/Library/Caches/com.reincubate.camo "Camo cache"
safe_clean ~/Library/Caches/com.xnipapp.xnip "Xnip cache"
}
# Clean email clients
# Email clients.
clean_email_clients() {
safe_clean ~/Library/Caches/com.readdle.smartemail-Mac "Spark cache"
safe_clean ~/Library/Caches/com.airmail.* "Airmail cache"
}
# Clean task management apps
# Task management apps.
clean_task_apps() {
safe_clean ~/Library/Caches/com.todoist.mac.Todoist "Todoist cache"
safe_clean ~/Library/Caches/com.any.do.* "Any.do cache"
}
# Clean shell and terminal utilities
# Shell/terminal utilities.
clean_shell_utils() {
safe_clean ~/.zcompdump* "Zsh completion cache"
safe_clean ~/.lesshst "less history"
safe_clean ~/.viminfo.tmp "Vim temporary files"
safe_clean ~/.wget-hsts "wget HSTS cache"
}
# Clean input method and system utilities
# Input methods and system utilities.
clean_system_utils() {
safe_clean ~/Library/Caches/com.runjuu.Input-Source-Pro/* "Input Source Pro cache"
safe_clean ~/Library/Caches/macos-wakatime.WakaTime/* "WakaTime cache"
}
# Clean note-taking apps
# Note-taking apps.
clean_note_apps() {
safe_clean ~/Library/Caches/notion.id/* "Notion cache"
safe_clean ~/Library/Caches/md.obsidian/* "Obsidian cache"
@@ -199,19 +194,19 @@ clean_note_apps() {
safe_clean ~/Library/Caches/com.evernote.*/* "Evernote cache"
safe_clean ~/Library/Caches/com.yinxiang.*/* "Yinxiang Note cache"
}
# Clean launcher and automation tools
# Launchers and automation tools.
clean_launcher_apps() {
safe_clean ~/Library/Caches/com.runningwithcrayons.Alfred/* "Alfred cache"
safe_clean ~/Library/Caches/cx.c3.theunarchiver/* "The Unarchiver cache"
}
# Clean remote desktop tools
# Remote desktop tools.
clean_remote_desktop() {
safe_clean ~/Library/Caches/com.teamviewer.*/* "TeamViewer cache"
safe_clean ~/Library/Caches/com.anydesk.*/* "AnyDesk cache"
safe_clean ~/Library/Caches/com.todesk.*/* "ToDesk cache"
safe_clean ~/Library/Caches/com.sunlogin.*/* "Sunlogin cache"
}
# Main function to clean all user GUI applications
# Main entry for GUI app cleanup.
clean_user_gui_applications() {
stop_section_spinner
clean_xcode_tools

@@ -2,7 +2,6 @@
# Application Data Cleanup Module
set -euo pipefail
# Args: $1=target_dir, $2=label
# Clean .DS_Store (Finder metadata), home uses maxdepth 5, excludes slow paths, max 500 files
clean_ds_store_tree() {
local target="$1"
local label="$2"
@@ -15,7 +14,6 @@ clean_ds_store_tree() {
start_inline_spinner "Cleaning Finder metadata..."
spinner_active="true"
fi
# Build exclusion paths for find (skip common slow/large directories)
local -a exclude_paths=(
-path "*/Library/Application Support/MobileSync" -prune -o
-path "*/Library/Developer" -prune -o
@@ -24,13 +22,11 @@ clean_ds_store_tree() {
-path "*/.git" -prune -o
-path "*/Library/Caches" -prune -o
)
# Build find command to avoid unbound array expansion with set -u
local -a find_cmd=("command" "find" "$target")
if [[ "$target" == "$HOME" ]]; then
find_cmd+=("-maxdepth" "5")
fi
find_cmd+=("${exclude_paths[@]}" "-type" "f" "-name" ".DS_Store" "-print0")
# Find .DS_Store files with exclusions and depth limit
while IFS= read -r -d '' ds_file; do
local size
size=$(get_file_size "$ds_file")
@@ -61,14 +57,11 @@ clean_ds_store_tree() {
note_activity
fi
}
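
# A minimal sketch of the array-built find used above. Keeping the whole
# command in one non-empty array sidesteps Bash 3.2's unbound-variable error
# when expanding empty arrays under set -u, and keeps -prune expressions and
# -print0 composable. Paths here are illustrative.
demo_find_ds_store() {
    local target="${1:-$HOME/Desktop}"
    local -a cmd=(find "$target" -maxdepth 3
        -path "*/.git" -prune -o
        -type f -name ".DS_Store" -print0)
    local f
    while IFS= read -r -d '' f; do
        printf 'would remove: %s\n' "$f"
    done < <("${cmd[@]}" 2> /dev/null)
}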
# Clean data for uninstalled apps (caches/logs/states older than 60 days)
# Protects system apps, major vendors, scans /Applications+running processes
# Max 100 items/pattern, 2s du timeout. Env: ORPHAN_AGE_THRESHOLD, DRY_RUN
# Orphaned app data (60+ days inactive). Env: ORPHAN_AGE_THRESHOLD, DRY_RUN
# Usage: scan_installed_apps "output_file"
# Scan system for installed application bundle IDs
scan_installed_apps() {
local installed_bundles="$1"
# Performance optimization: cache results for 5 minutes
# Cache installed app scan briefly to speed repeated runs.
local cache_file="$HOME/.cache/mole/installed_apps_cache"
local cache_age_seconds=300 # 5 minutes
if [[ -f "$cache_file" ]]; then
@@ -77,7 +70,6 @@ scan_installed_apps() {
local age=$((current_time - cache_mtime))
if [[ $age -lt $cache_age_seconds ]]; then
debug_log "Using cached app list (age: ${age}s)"
# Verify cache file is readable and not empty
if [[ -r "$cache_file" ]] && [[ -s "$cache_file" ]]; then
if cat "$cache_file" > "$installed_bundles" 2> /dev/null; then
return 0
@@ -90,26 +82,22 @@ scan_installed_apps() {
fi
fi
debug_log "Scanning installed applications (cache expired or missing)"
# Scan all Applications directories
local -a app_dirs=(
"/Applications"
"/System/Applications"
"$HOME/Applications"
)
# Create a temp dir for parallel results to avoid write contention
# Temp dir avoids write contention across parallel scans.
local scan_tmp_dir=$(create_temp_dir)
# Parallel scan for applications
local pids=()
local dir_idx=0
for app_dir in "${app_dirs[@]}"; do
[[ -d "$app_dir" ]] || continue
(
# Quickly find all .app bundles first
local -a app_paths=()
while IFS= read -r app_path; do
[[ -n "$app_path" ]] && app_paths+=("$app_path")
done < <(find "$app_dir" -name '*.app' -maxdepth 3 -type d 2> /dev/null)
# Read bundle IDs with PlistBuddy
local count=0
for app_path in "${app_paths[@]:-}"; do
local plist_path="$app_path/Contents/Info.plist"
@@ -124,7 +112,7 @@ scan_installed_apps() {
pids+=($!)
((dir_idx++))
done
# Get running applications and LaunchAgents in parallel
# Collect running apps and LaunchAgents to avoid false orphan cleanup.
(
local running_apps=$(run_with_timeout 5 osascript -e 'tell application "System Events" to get bundle identifier of every application process' 2> /dev/null || echo "")
echo "$running_apps" | tr ',' '\n' | sed -e 's/^ *//;s/ *$//' -e '/^$/d' > "$scan_tmp_dir/running.txt"
@@ -136,7 +124,6 @@ scan_installed_apps() {
xargs -I {} basename {} .plist > "$scan_tmp_dir/agents.txt" 2> /dev/null || true
) &
pids+=($!)
# Wait for all background scans to complete
debug_log "Waiting for ${#pids[@]} background processes: ${pids[*]}"
for pid in "${pids[@]}"; do
wait "$pid" 2> /dev/null || true
@@ -145,37 +132,30 @@ scan_installed_apps() {
cat "$scan_tmp_dir"/*.txt >> "$installed_bundles" 2> /dev/null || true
safe_remove "$scan_tmp_dir" true
sort -u "$installed_bundles" -o "$installed_bundles"
# Cache the results
ensure_user_dir "$(dirname "$cache_file")"
cp "$installed_bundles" "$cache_file" 2> /dev/null || true
local app_count=$(wc -l < "$installed_bundles" 2> /dev/null | tr -d ' ')
debug_log "Scanned $app_count unique applications"
}
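
# Sketch of the mtime-based freshness gate above: reuse the cached app list
# only while it is younger than five minutes. The stat fallback covers both
# BSD (macOS) and GNU coreutils, the same role get_file_mtime plays elsewhere
# in this repo.
cache_is_fresh() {
    local cache_file="$1" max_age="${2:-300}"
    [[ -f "$cache_file" ]] || return 1
    local mtime
    mtime=$(stat -f %m "$cache_file" 2> /dev/null || stat -c %Y "$cache_file" 2> /dev/null) || return 1
    [[ $(($(date +%s) - mtime)) -lt $max_age ]]
}
# Usage: cache_is_fresh ~/.cache/mole/installed_apps_cache && echo "reuse cache"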
# Usage: is_bundle_orphaned "bundle_id" "directory_path" "installed_bundles_file"
# Check if bundle is orphaned
is_bundle_orphaned() {
local bundle_id="$1"
local directory_path="$2"
local installed_bundles="$3"
# Skip system-critical and protected apps
if should_protect_data "$bundle_id"; then
return 1
fi
# Check if app exists in our scan
if grep -Fxq "$bundle_id" "$installed_bundles" 2> /dev/null; then
return 1
fi
# Check against centralized protected patterns (app_protection.sh)
if should_protect_data "$bundle_id"; then
return 1
fi
# Extra check for specific system bundles not covered by patterns
case "$bundle_id" in
loginwindow | dock | systempreferences | systemsettings | settings | controlcenter | finder | safari)
return 1
;;
esac
# Check file age - only clean if 60+ days inactive
if [[ -e "$directory_path" ]]; then
local last_modified_epoch=$(get_file_mtime "$directory_path")
local current_epoch=$(date +%s)
@@ -186,31 +166,23 @@ is_bundle_orphaned() {
fi
return 0
}
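
# Sketch of the age gate above: data counts as orphaned only after
# ORPHAN_AGE_THRESHOLD days (60 per the comments here) without modification.
is_older_than_days() {
    local path="$1" days="${2:-${ORPHAN_AGE_THRESHOLD:-60}}"
    local mtime
    mtime=$(stat -f %m "$path" 2> /dev/null || stat -c %Y "$path" 2> /dev/null) || return 1
    [[ $((($(date +%s) - mtime) / 86400)) -ge $days ]]
}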
# Clean data for uninstalled apps (caches/logs/states older than 60 days)
# Max 100 items/pattern, 2s du timeout. Env: ORPHAN_AGE_THRESHOLD, DRY_RUN
# Protects system apps, major vendors, scans /Applications+running processes
# Orphaned app data sweep.
clean_orphaned_app_data() {
# Quick permission check - if we can't access Library folders, skip
if ! ls "$HOME/Library/Caches" > /dev/null 2>&1; then
stop_section_spinner
echo -e " ${YELLOW}${ICON_WARNING}${NC} Skipped: No permission to access Library folders"
return 0
fi
# Build list of installed/active apps
start_section_spinner "Scanning installed apps..."
local installed_bundles=$(create_temp_file)
scan_installed_apps "$installed_bundles"
stop_section_spinner
# Display scan results
local app_count=$(wc -l < "$installed_bundles" 2> /dev/null | tr -d ' ')
echo -e " ${GREEN}${ICON_SUCCESS}${NC} Found $app_count active/installed apps"
# Track statistics
local orphaned_count=0
local total_orphaned_kb=0
# Unified orphaned resource scanner (caches, logs, states, webkit, HTTP, cookies)
start_section_spinner "Scanning orphaned app resources..."
# Define resource types to scan
# CRITICAL: NEVER add LaunchAgents or LaunchDaemons (breaks login items/startup apps)
# CRITICAL: NEVER add LaunchAgents or LaunchDaemons (breaks login items/startup apps).
local -a resource_types=(
"$HOME/Library/Caches|Caches|com.*:org.*:net.*:io.*"
"$HOME/Library/Logs|Logs|com.*:org.*:net.*:io.*"
@@ -222,38 +194,29 @@ clean_orphaned_app_data() {
orphaned_count=0
for resource_type in "${resource_types[@]}"; do
IFS='|' read -r base_path label patterns <<< "$resource_type"
# Check both existence and permission to avoid hanging
if [[ ! -d "$base_path" ]]; then
continue
fi
# Quick permission check - if we can't ls the directory, skip it
if ! ls "$base_path" > /dev/null 2>&1; then
continue
fi
# Build file pattern array
local -a file_patterns=()
IFS=':' read -ra pattern_arr <<< "$patterns"
for pat in "${pattern_arr[@]}"; do
file_patterns+=("$base_path/$pat")
done
# Scan and clean orphaned items
for item_path in "${file_patterns[@]}"; do
# Use shell glob (no ls needed)
# Limit iterations to prevent hanging on directories with too many files
local iteration_count=0
for match in $item_path; do
[[ -e "$match" ]] || continue
# Safety: limit iterations to prevent infinite loops on massive directories
((iteration_count++))
if [[ $iteration_count -gt $MOLE_MAX_ORPHAN_ITERATIONS ]]; then
break
fi
# Extract bundle ID from filename
local bundle_id=$(basename "$match")
bundle_id="${bundle_id%.savedState}"
bundle_id="${bundle_id%.binarycookies}"
if is_bundle_orphaned "$bundle_id" "$match" "$installed_bundles"; then
# Use timeout to prevent du from hanging on network mounts or problematic paths
local size_kb
size_kb=$(get_path_size_kb "$match")
if [[ -z "$size_kb" || "$size_kb" == "0" ]]; then

@@ -4,13 +4,11 @@
# Skips if run within 7 days, runs cleanup/autoremove in parallel with 120s timeout
clean_homebrew() {
command -v brew > /dev/null 2>&1 || return 0
# Dry run mode - just indicate what would happen
if [[ "${DRY_RUN:-false}" == "true" ]]; then
echo -e " ${YELLOW}${ICON_DRY_RUN}${NC} Homebrew · would cleanup and autoremove"
return 0
fi
# Smart caching: check if brew cleanup was run recently (within 7 days)
# Extended from 2 days to 7 days to reduce cleanup frequency
# Skip if cleaned recently to avoid repeated heavy operations.
local brew_cache_file="${HOME}/.cache/mole/brew_last_cleanup"
local cache_valid_days=7
local should_skip=false
@@ -27,20 +25,17 @@ clean_homebrew() {
fi
fi
[[ "$should_skip" == "true" ]] && return 0
# Quick pre-check: determine if cleanup is needed based on cache size (<50MB)
# Use timeout to prevent slow du on very large caches
# If timeout occurs, assume cache is large and run cleanup
# Skip cleanup if cache is small; still run autoremove.
local skip_cleanup=false
local brew_cache_size=0
if [[ -d ~/Library/Caches/Homebrew ]]; then
brew_cache_size=$(run_with_timeout 3 du -sk ~/Library/Caches/Homebrew 2> /dev/null | awk '{print $1}')
local du_exit=$?
# Skip cleanup (but still run autoremove) if cache is small
if [[ $du_exit -eq 0 && -n "$brew_cache_size" && "$brew_cache_size" -lt 51200 ]]; then
skip_cleanup=true
fi
fi
# Display appropriate spinner message
# Spinner reflects whether cleanup is skipped.
if [[ -t 1 ]]; then
if [[ "$skip_cleanup" == "true" ]]; then
MOLE_SPINNER_PREFIX=" " start_inline_spinner "Homebrew autoremove (cleanup skipped)..."
@@ -48,8 +43,8 @@ clean_homebrew() {
MOLE_SPINNER_PREFIX=" " start_inline_spinner "Homebrew cleanup and autoremove..."
fi
fi
# Run cleanup/autoremove in parallel with a timeout guard.
local timeout_seconds=${MO_BREW_TIMEOUT:-120}
# Run brew cleanup and/or autoremove based on cache size
local brew_tmp_file autoremove_tmp_file
local brew_pid autoremove_pid
if [[ "$skip_cleanup" == "false" ]]; then
@@ -63,9 +58,7 @@ clean_homebrew() {
local elapsed=0
local brew_done=false
local autoremove_done=false
# Mark cleanup as done if it was skipped
[[ "$skip_cleanup" == "true" ]] && brew_done=true
# Wait for both to complete or timeout
while [[ "$brew_done" == "false" ]] || [[ "$autoremove_done" == "false" ]]; do
if [[ $elapsed -ge $timeout_seconds ]]; then
[[ -n "$brew_pid" ]] && kill -TERM $brew_pid 2> /dev/null || true
@@ -77,7 +70,6 @@ clean_homebrew() {
sleep 1
((elapsed++))
done
# Wait for processes to finish
local brew_success=false
if [[ "$skip_cleanup" == "false" && -n "$brew_pid" ]]; then
if wait $brew_pid 2> /dev/null; then
@@ -90,6 +82,7 @@ clean_homebrew() {
fi
if [[ -t 1 ]]; then stop_inline_spinner; fi
# Process cleanup output and extract metrics
# Summarize cleanup results.
if [[ "$skip_cleanup" == "true" ]]; then
# Cleanup was skipped due to small cache size
local size_mb=$((brew_cache_size / 1024))
@@ -111,6 +104,7 @@ clean_homebrew() {
echo -e " ${YELLOW}${ICON_WARNING}${NC} Homebrew cleanup timed out · run ${GRAY}brew cleanup${NC} manually"
fi
# Process autoremove output - only show if packages were removed
# Only surface autoremove output when packages were removed.
if [[ "$autoremove_success" == "true" && -f "$autoremove_tmp_file" ]]; then
local autoremove_output
autoremove_output=$(cat "$autoremove_tmp_file" 2> /dev/null || echo "")
@@ -124,6 +118,7 @@ clean_homebrew() {
fi
# Update cache timestamp on successful completion or when cleanup was intelligently skipped
# This prevents repeated cache size checks within the 7-day window
# Update cache timestamp when any work succeeded or was intentionally skipped.
if [[ "$skip_cleanup" == "true" ]] || [[ "$brew_success" == "true" ]] || [[ "$autoremove_success" == "true" ]]; then
ensure_user_file "$brew_cache_file"
date +%s > "$brew_cache_file"
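
# Sketch of the poll-with-timeout pattern above: background both jobs, poll
# once per second, and TERM whatever is still alive at the deadline. The
# sleeps stand in for brew cleanup/autoremove; elapsed is incremented with
# $((...)) so a zero arithmetic result cannot trip set -e.
demo_parallel_with_deadline() {
    local deadline="${MO_BREW_TIMEOUT:-120}"
    sleep 3 & local pid_a=$! # stand-in for: brew cleanup
    sleep 5 & local pid_b=$! # stand-in for: brew autoremove
    local elapsed=0
    while kill -0 "$pid_a" 2> /dev/null || kill -0 "$pid_b" 2> /dev/null; do
        if [[ $elapsed -ge $deadline ]]; then
            kill -TERM "$pid_a" "$pid_b" 2> /dev/null || true
            break
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    wait "$pid_a" 2> /dev/null || true
    wait "$pid_b" 2> /dev/null || true
}
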
@@ -1,15 +1,11 @@
#!/bin/bash
# Cache Cleanup Module
set -euo pipefail
# Only runs once (uses ~/.cache/mole/permissions_granted flag)
# Trigger all TCC permission dialogs upfront to avoid random interruptions
# Preflight TCC prompts once to avoid mid-run interruptions.
check_tcc_permissions() {
# Only check in interactive mode
[[ -t 1 ]] || return 0
local permission_flag="$HOME/.cache/mole/permissions_granted"
# Skip if permissions were already granted
[[ -f "$permission_flag" ]] && return 0
# Key protected directories that require TCC approval
local -a tcc_dirs=(
"$HOME/Library/Caches"
"$HOME/Library/Logs"
@@ -17,8 +13,7 @@ check_tcc_permissions() {
"$HOME/Library/Containers"
"$HOME/.cache"
)
# Quick permission test - if first directory is accessible, likely others are too
# Use simple ls test instead of find to avoid triggering permission dialogs prematurely
# Quick permission probe (avoid deep scans).
local needs_permission_check=false
if ! ls "$HOME/Library/Caches" > /dev/null 2>&1; then
needs_permission_check=true
@@ -32,35 +27,30 @@ check_tcc_permissions() {
echo -ne "${PURPLE}${ICON_ARROW}${NC} Press ${GREEN}Enter${NC} to continue: "
read -r
MOLE_SPINNER_PREFIX="" start_inline_spinner "Requesting permissions..."
# Trigger all TCC prompts upfront by accessing each directory
# Using find -maxdepth 1 ensures we touch the directory without deep scanning
# Touch each directory to trigger prompts without deep scanning.
for dir in "${tcc_dirs[@]}"; do
[[ -d "$dir" ]] && command find "$dir" -maxdepth 1 -type d > /dev/null 2>&1
done
stop_inline_spinner
echo ""
fi
# Mark permissions as granted (won't prompt again)
# Mark as granted to avoid repeat prompts.
ensure_user_file "$permission_flag"
return 0
}
# Args: $1=browser_name, $2=cache_path
# Clean browser Service Worker cache, protecting web editing tools (capcut, photopea, pixlr)
# Clean Service Worker cache while protecting critical web editors.
clean_service_worker_cache() {
local browser_name="$1"
local cache_path="$2"
[[ ! -d "$cache_path" ]] && return 0
local cleaned_size=0
local protected_count=0
# Find all cache directories and calculate sizes with timeout protection
while IFS= read -r cache_dir; do
[[ ! -d "$cache_dir" ]] && continue
# Extract domain from path using regex
# Pattern matches: letters/numbers, hyphens, then dot, then TLD
# Example: "abc123_https_example.com_0" → "example.com"
# Extract a best-effort domain name from cache folder.
local domain=$(basename "$cache_dir" | grep -oE '[a-zA-Z0-9][-a-zA-Z0-9]*\.[a-zA-Z]{2,}' | head -1 || echo "")
local size=$(run_with_timeout 5 get_path_size_kb "$cache_dir")
# Check if domain is protected
local is_protected=false
for protected_domain in "${PROTECTED_SW_DOMAINS[@]}"; do
if [[ "$domain" == *"$protected_domain"* ]]; then
@@ -69,7 +59,6 @@ clean_service_worker_cache() {
break
fi
done
# Clean if not protected
if [[ "$is_protected" == "false" ]]; then
if [[ "$DRY_RUN" != "true" ]]; then
safe_remove "$cache_dir" true || true
@@ -78,7 +67,6 @@ clean_service_worker_cache() {
fi
done < <(run_with_timeout 10 sh -c "find '$cache_path' -type d -depth 2 2> /dev/null || true")
if [[ $cleaned_size -gt 0 ]]; then
# Temporarily stop spinner for clean output
local spinner_was_running=false
if [[ -t 1 && -n "${INLINE_SPINNER_PID:-}" ]]; then
stop_inline_spinner
@@ -95,17 +83,15 @@ clean_service_worker_cache() {
echo -e " ${YELLOW}${ICON_DRY_RUN}${NC} $browser_name Service Worker (would clean ${cleaned_mb}MB, ${protected_count} protected)"
fi
note_activity
# Restart spinner if it was running
if [[ "$spinner_was_running" == "true" ]]; then
MOLE_SPINNER_PREFIX=" " start_inline_spinner "Scanning browser Service Worker caches..."
fi
fi
}
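
# Sketch of the domain extraction above, isolated for clarity:
extract_sw_domain() {
    basename "$1" | grep -oE '[a-zA-Z0-9][-a-zA-Z0-9]*\.[a-zA-Z]{2,}' | head -1
}
# extract_sw_domain "abc123_https_example.com_0"   -> example.com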
# Uses maxdepth 3, excludes Library/.Trash/node_modules, 10s timeout per scan
# Clean Next.js (.next/cache) and Python (__pycache__) build caches
# Next.js/Python project caches with tight scan bounds and timeouts.
clean_project_caches() {
stop_inline_spinner 2> /dev/null || true
# Quick check: skip if user likely doesn't have development projects
# Fast pre-check before scanning the whole home dir.
local has_dev_projects=false
local -a common_dev_dirs=(
"$HOME/Code"
@@ -133,8 +119,7 @@ clean_project_caches() {
break
fi
done
# If no common dev directories found, perform feature-based detection
# Check for project markers in $HOME (node_modules, .git, target, etc.)
# Fallback: look for project markers near $HOME.
if [[ "$has_dev_projects" == "false" ]]; then
local -a project_markers=(
"node_modules"
@@ -153,7 +138,6 @@ clean_project_caches() {
spinner_active=true
fi
for marker in "${project_markers[@]}"; do
# Quick check with maxdepth 2 and 3s timeout to avoid slow scans
if run_with_timeout 3 sh -c "find '$HOME' -maxdepth 2 -name '$marker' -not -path '*/Library/*' -not -path '*/.Trash/*' 2>/dev/null | head -1" | grep -q .; then
has_dev_projects=true
break
@@ -162,7 +146,6 @@ clean_project_caches() {
if [[ "$spinner_active" == "true" ]]; then
stop_inline_spinner 2> /dev/null || true
fi
# If still no dev projects found, skip scanning
[[ "$has_dev_projects" == "false" ]] && return 0
fi
if [[ -t 1 ]]; then
@@ -174,7 +157,7 @@ clean_project_caches() {
local pycache_tmp_file
pycache_tmp_file=$(create_temp_file)
local find_timeout=10
# 1. Start Next.js search
# Parallel scans (Next.js and __pycache__).
(
command find "$HOME" -P -mount -type d -name ".next" -maxdepth 3 \
-not -path "*/Library/*" \
@@ -184,7 +167,6 @@ clean_project_caches() {
2> /dev/null || true
) > "$nextjs_tmp_file" 2>&1 &
local next_pid=$!
# 2. Start Python search
(
command find "$HOME" -P -mount -type d -name "__pycache__" -maxdepth 3 \
-not -path "*/Library/*" \
@@ -194,7 +176,6 @@ clean_project_caches() {
2> /dev/null || true
) > "$pycache_tmp_file" 2>&1 &
local py_pid=$!
# 3. Wait for both with timeout (using smaller intervals for better responsiveness)
local elapsed=0
local check_interval=0.2 # Check every 200ms instead of 1s for smoother experience
while [[ $(echo "$elapsed < $find_timeout" | awk '{print ($1 < $2)}') -eq 1 ]]; do
@@ -204,12 +185,10 @@ clean_project_caches() {
sleep $check_interval
elapsed=$(echo "$elapsed + $check_interval" | awk '{print $1 + $2}')
done
# 4. Clean up any stuck processes
# Kill stuck scans after timeout.
for pid in $next_pid $py_pid; do
if kill -0 "$pid" 2> /dev/null; then
# Send TERM signal first
kill -TERM "$pid" 2> /dev/null || true
# Wait up to 2 seconds for graceful termination
local grace_period=0
while [[ $grace_period -lt 20 ]]; do
if ! kill -0 "$pid" 2> /dev/null; then
@@ -218,11 +197,9 @@ clean_project_caches() {
sleep 0.1
((grace_period++))
done
# Force kill if still running
if kill -0 "$pid" 2> /dev/null; then
kill -KILL "$pid" 2> /dev/null || true
fi
# Final wait (should be instant now)
wait "$pid" 2> /dev/null || true
else
wait "$pid" 2> /dev/null || true
@@ -231,11 +208,9 @@ clean_project_caches() {
if [[ -t 1 ]]; then
stop_inline_spinner
fi
# 5. Process Next.js results
while IFS= read -r next_dir; do
[[ -d "$next_dir/cache" ]] && safe_clean "$next_dir/cache"/* "Next.js build cache" || true
done < "$nextjs_tmp_file"
# 6. Process Python results
while IFS= read -r pycache; do
[[ -d "$pycache" ]] && safe_clean "$pycache"/* "Python bytecode cache" || true
done < "$pycache_tmp_file"
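
# Sketch of the TERM-then-KILL ladder above: ask politely, wait up to ~2s in
# 100ms steps, force-kill as a last resort, then reap with wait.
stop_pid_gracefully() {
    local pid="$1"
    if kill -0 "$pid" 2> /dev/null; then
        kill -TERM "$pid" 2> /dev/null || true
        local tries=0
        while [[ $tries -lt 20 ]] && kill -0 "$pid" 2> /dev/null; do
            sleep 0.1
            tries=$((tries + 1))
        done
        if kill -0 "$pid" 2> /dev/null; then
            kill -KILL "$pid" 2> /dev/null || true
        fi
    fi
    wait "$pid" 2> /dev/null || true
}
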
@@ -1,11 +1,7 @@
#!/bin/bash
# Developer Tools Cleanup Module
set -euo pipefail
# Helper function to clean tool caches using their built-in commands
# Args: $1 - description, $@ - command to execute
# Env: DRY_RUN
# Note: Try to estimate potential savings (many tool caches don't have a direct path,
# so we just report the action if we can't easily find a path)
# Tool cache helper (respects DRY_RUN).
clean_tool_cache() {
local description="$1"
shift
@@ -18,50 +14,38 @@ clean_tool_cache() {
fi
return 0
}
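
# The body of clean_tool_cache is elided by the diff; a minimal sketch of a
# wrapper in the same spirit (the real helper also estimates reclaimed size):
run_or_report() {
    local description="$1"
    shift
    if [[ "${DRY_RUN:-false}" == "true" ]]; then
        echo "would run: $description ($*)"
        return 0
    fi
    "$@" > /dev/null 2>&1 || true
}
# run_or_report "npm cache" npm cache clean --force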
# Clean npm cache (command + directories)
# Env: DRY_RUN
# npm cache clean clears official npm cache, safe_clean handles alternative package managers
# npm/pnpm/yarn/bun caches.
clean_dev_npm() {
if command -v npm > /dev/null 2>&1; then
# clean_tool_cache now calculates size before cleanup for better statistics
clean_tool_cache "npm cache" npm cache clean --force
note_activity
fi
# Clean pnpm store cache
local pnpm_default_store=~/Library/pnpm/store
if command -v pnpm > /dev/null 2>&1; then
# Use pnpm's built-in prune command
clean_tool_cache "pnpm cache" pnpm store prune
# Get the actual store path to check if default is orphaned
local pnpm_store_path
start_section_spinner "Checking store path..."
pnpm_store_path=$(run_with_timeout 2 pnpm store path 2> /dev/null) || pnpm_store_path=""
stop_section_spinner
# If store path is different from default, clean the orphaned default
if [[ -n "$pnpm_store_path" && "$pnpm_store_path" != "$pnpm_default_store" ]]; then
safe_clean "$pnpm_default_store"/* "Orphaned pnpm store"
fi
else
# pnpm not installed, clean default location
safe_clean "$pnpm_default_store"/* "pnpm store"
fi
note_activity
# Clean alternative package manager caches
safe_clean ~/.tnpm/_cacache/* "tnpm cache directory"
safe_clean ~/.tnpm/_logs/* "tnpm logs"
safe_clean ~/.yarn/cache/* "Yarn cache"
safe_clean ~/.bun/install/cache/* "Bun cache"
}
# Clean Python/pip cache (command + directories)
# Env: DRY_RUN
# pip cache purge clears official pip cache, safe_clean handles other Python tools
# Python/pip ecosystem caches.
clean_dev_python() {
if command -v pip3 > /dev/null 2>&1; then
# clean_tool_cache now calculates size before cleanup for better statistics
clean_tool_cache "pip cache" bash -c 'pip3 cache purge >/dev/null 2>&1 || true'
note_activity
fi
# Clean Python ecosystem caches
safe_clean ~/.pyenv/cache/* "pyenv cache"
safe_clean ~/.cache/poetry/* "Poetry cache"
safe_clean ~/.cache/uv/* "uv cache"
@@ -76,28 +60,23 @@ clean_dev_python() {
safe_clean ~/anaconda3/pkgs/* "Anaconda packages cache"
safe_clean ~/.cache/wandb/* "Weights & Biases cache"
}
# Clean Go cache (command + directories)
# Env: DRY_RUN
# go clean handles build and module caches comprehensively
# Go build/module caches.
clean_dev_go() {
if command -v go > /dev/null 2>&1; then
# clean_tool_cache now calculates size before cleanup for better statistics
clean_tool_cache "Go cache" bash -c 'go clean -modcache >/dev/null 2>&1 || true; go clean -cache >/dev/null 2>&1 || true'
note_activity
fi
}
# Clean Rust/cargo cache directories
# Rust/cargo caches.
clean_dev_rust() {
safe_clean ~/.cargo/registry/cache/* "Rust cargo cache"
safe_clean ~/.cargo/git/* "Cargo git cache"
safe_clean ~/.rustup/downloads/* "Rust downloads cache"
}
# Env: DRY_RUN
# Clean Docker cache (command + directories)
# Docker caches (guarded by daemon check).
clean_dev_docker() {
if command -v docker > /dev/null 2>&1; then
if [[ "$DRY_RUN" != "true" ]]; then
# Check if Docker daemon is running (with timeout to prevent hanging)
start_section_spinner "Checking Docker daemon..."
local docker_running=false
if run_with_timeout 3 docker info > /dev/null 2>&1; then
@@ -107,7 +86,6 @@ clean_dev_docker() {
if [[ "$docker_running" == "true" ]]; then
clean_tool_cache "Docker build cache" docker builder prune -af
else
# Docker not running - silently skip without user interaction
debug_log "Docker daemon not running, skipping Docker cache cleanup"
fi
else
@@ -117,8 +95,7 @@ clean_dev_docker() {
fi
safe_clean ~/.docker/buildx/cache/* "Docker BuildX cache"
}
# Env: DRY_RUN
# Clean Nix package manager
# Nix garbage collection.
clean_dev_nix() {
if command -v nix-collect-garbage > /dev/null 2>&1; then
if [[ "$DRY_RUN" != "true" ]]; then
@@ -129,7 +106,7 @@ clean_dev_nix() {
note_activity
fi
}
# Clean cloud CLI tools cache
# Cloud CLI caches.
clean_dev_cloud() {
safe_clean ~/.kube/cache/* "Kubernetes cache"
safe_clean ~/.local/share/containers/storage/tmp/* "Container storage temp"
@@ -137,7 +114,7 @@ clean_dev_cloud() {
safe_clean ~/.config/gcloud/logs/* "Google Cloud logs"
safe_clean ~/.azure/logs/* "Azure CLI logs"
}
# Clean frontend build tool caches
# Frontend build caches.
clean_dev_frontend() {
safe_clean ~/.cache/typescript/* "TypeScript cache"
safe_clean ~/.cache/electron/* "Electron cache"
@@ -151,40 +128,29 @@ clean_dev_frontend() {
safe_clean ~/.cache/eslint/* "ESLint cache"
safe_clean ~/.cache/prettier/* "Prettier cache"
}
# Clean mobile development tools
# iOS simulator cleanup can free significant space (70GB+ in some cases)
# Simulator runtime caches can grow large over time
# DeviceSupport files accumulate for each iOS version connected
# Mobile dev caches (can be large).
clean_dev_mobile() {
# Clean Xcode unavailable simulators
# Removes old and unused local iOS simulator data from old unused runtimes
# Can free up significant space (70GB+ in some cases)
if command -v xcrun > /dev/null 2>&1; then
debug_log "Checking for unavailable Xcode simulators"
if [[ "$DRY_RUN" == "true" ]]; then
clean_tool_cache "Xcode unavailable simulators" xcrun simctl delete unavailable
else
start_section_spinner "Checking unavailable simulators..."
# Run command manually to control UI output order
if xcrun simctl delete unavailable > /dev/null 2>&1; then
stop_section_spinner
echo -e " ${GREEN}${ICON_SUCCESS}${NC} Xcode unavailable simulators"
else
stop_section_spinner
# Silently fail or log error if needed, matching clean_tool_cache behavior
fi
fi
note_activity
fi
# Clean iOS DeviceSupport - more comprehensive cleanup
# DeviceSupport directories store debug symbols for each iOS version
# Safe to clean caches and logs, but preserve device support files themselves
# DeviceSupport caches/logs (preserve core support files).
safe_clean ~/Library/Developer/Xcode/iOS\ DeviceSupport/*/Symbols/System/Library/Caches/* "iOS device symbol cache"
safe_clean ~/Library/Developer/Xcode/iOS\ DeviceSupport/*.log "iOS device support logs"
safe_clean ~/Library/Developer/Xcode/watchOS\ DeviceSupport/*/Symbols/System/Library/Caches/* "watchOS device symbol cache"
safe_clean ~/Library/Developer/Xcode/tvOS\ DeviceSupport/*/Symbols/System/Library/Caches/* "tvOS device symbol cache"
# Clean simulator runtime caches
# RuntimeRoot caches can accumulate system library caches
# Simulator runtime caches.
safe_clean ~/Library/Developer/CoreSimulator/Profiles/Runtimes/*/Contents/Resources/RuntimeRoot/System/Library/Caches/* "Simulator runtime cache"
safe_clean ~/Library/Caches/Google/AndroidStudio*/* "Android Studio cache"
safe_clean ~/Library/Caches/CocoaPods/* "CocoaPods cache"
@@ -194,14 +160,14 @@ clean_dev_mobile() {
safe_clean ~/Library/Developer/Xcode/UserData/IB\ Support/* "Xcode Interface Builder cache"
safe_clean ~/.cache/swift-package-manager/* "Swift package manager cache"
}
# Clean JVM ecosystem tools
# JVM ecosystem caches.
clean_dev_jvm() {
safe_clean ~/.gradle/caches/* "Gradle caches"
safe_clean ~/.gradle/daemon/* "Gradle daemon logs"
safe_clean ~/.sbt/* "SBT cache"
safe_clean ~/.ivy2/cache/* "Ivy cache"
}
# Clean other language tools
# Other language tool caches.
clean_dev_other_langs() {
safe_clean ~/.bundle/cache/* "Ruby Bundler cache"
safe_clean ~/.composer/cache/* "PHP Composer cache"
@@ -211,7 +177,7 @@ clean_dev_other_langs() {
safe_clean ~/.cache/zig/* "Zig cache"
safe_clean ~/Library/Caches/deno/* "Deno cache"
}
# Clean CI/CD and DevOps tools
# CI/CD and DevOps caches.
clean_dev_cicd() {
safe_clean ~/.cache/terraform/* "Terraform cache"
safe_clean ~/.grafana/cache/* "Grafana cache"
@@ -222,7 +188,7 @@ clean_dev_cicd() {
safe_clean ~/.circleci/cache/* "CircleCI cache"
safe_clean ~/.sonar/* "SonarQube cache"
}
# Clean database tools
# Database tool caches.
clean_dev_database() {
safe_clean ~/Library/Caches/com.sequel-ace.sequel-ace/* "Sequel Ace cache"
safe_clean ~/Library/Caches/com.eggerapps.Sequel-Pro/* "Sequel Pro cache"
@@ -231,7 +197,7 @@ clean_dev_database() {
safe_clean ~/Library/Caches/com.dbeaver.* "DBeaver cache"
safe_clean ~/Library/Caches/com.redis.RedisInsight "Redis Insight cache"
}
# Clean API/network debugging tools
# API/debugging tool caches.
clean_dev_api_tools() {
safe_clean ~/Library/Caches/com.postmanlabs.mac/* "Postman cache"
safe_clean ~/Library/Caches/com.konghq.insomnia/* "Insomnia cache"
@@ -240,7 +206,7 @@ clean_dev_api_tools() {
safe_clean ~/Library/Caches/com.charlesproxy.charles/* "Charles Proxy cache"
safe_clean ~/Library/Caches/com.proxyman.NSProxy/* "Proxyman cache"
}
# Clean misc dev tools
# Misc dev tool caches.
clean_dev_misc() {
safe_clean ~/Library/Caches/com.unity3d.*/* "Unity cache"
safe_clean ~/Library/Caches/com.mongodb.compass/* "MongoDB Compass cache"
@@ -250,7 +216,7 @@ clean_dev_misc() {
safe_clean ~/Library/Caches/KSCrash/* "KSCrash reports"
safe_clean ~/Library/Caches/com.crashlytics.data/* "Crashlytics data"
}
# Clean shell and version control
# Shell and VCS leftovers.
clean_dev_shell() {
safe_clean ~/.gitconfig.lock "Git config lock"
safe_clean ~/.gitconfig.bak* "Git config backup"
@@ -260,28 +226,20 @@ clean_dev_shell() {
safe_clean ~/.zsh_history.bak* "Zsh history backup"
safe_clean ~/.cache/pre-commit/* "pre-commit cache"
}
# Clean network utilities
# Network tool caches.
clean_dev_network() {
safe_clean ~/.cache/curl/* "curl cache"
safe_clean ~/.cache/wget/* "wget cache"
safe_clean ~/Library/Caches/curl/* "macOS curl cache"
safe_clean ~/Library/Caches/wget/* "macOS wget cache"
}
# Clean orphaned SQLite temporary files (-shm and -wal files)
# Strategy: Only clean truly orphaned temp files where base database is missing
# Env: DRY_RUN
# This is fast and safe - skip complex checks for files with existing base DB
# Orphaned SQLite temp files (-shm/-wal). Disabled due to low ROI.
clean_sqlite_temp_files() {
# Skip this cleanup due to low ROI (low payoff; there is usually nothing to clean)
# Find scan is still slow even optimized, and orphaned files are rare
return 0
}
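
# If this sweep were re-enabled, the orphan test described above is simple:
# a -wal/-shm sidecar is orphaned only when its base database file is gone.
# A sketch:
is_orphaned_sqlite_sidecar() {
    local sidecar="$1" # e.g. some/db.sqlite-wal
    case "$sidecar" in
        *-wal | *-shm) ;;
        *) return 1 ;;
    esac
    local base="${sidecar%-wal}"
    base="${base%-shm}"
    [[ -f "$sidecar" && ! -e "$base" ]]
}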
# Main developer tools cleanup function
# Env: DRY_RUN
# Calls all specialized cleanup functions
# Main developer tools cleanup sequence.
clean_developer_tools() {
stop_section_spinner
# Clean SQLite temporary files first
clean_sqlite_temp_files
clean_dev_npm
clean_dev_python
@@ -292,7 +250,6 @@ clean_developer_tools() {
clean_dev_nix
clean_dev_shell
clean_dev_frontend
# Project build caches (delegated to clean_caches module)
clean_project_caches
clean_dev_mobile
clean_dev_jvm
@@ -302,22 +259,17 @@ clean_developer_tools() {
clean_dev_api_tools
clean_dev_network
clean_dev_misc
# Homebrew caches and cleanup (delegated to clean_brew module)
safe_clean ~/Library/Caches/Homebrew/* "Homebrew cache"
# Clean Homebrew locks intelligently (avoid repeated sudo prompts)
# Clean Homebrew locks without repeated sudo prompts.
local brew_lock_dirs=(
"/opt/homebrew/var/homebrew/locks"
"/usr/local/var/homebrew/locks"
)
for lock_dir in "${brew_lock_dirs[@]}"; do
if [[ -d "$lock_dir" && -w "$lock_dir" ]]; then
# User can write, safe to clean
safe_clean "$lock_dir"/* "Homebrew lock files"
elif [[ -d "$lock_dir" ]]; then
# Directory exists but not writable. Check if empty to avoid noise.
if find "$lock_dir" -mindepth 1 -maxdepth 1 -print -quit 2> /dev/null | grep -q .; then
# Only try sudo ONCE if we really need to, or just skip to avoid spam
# Decision: skip system/root-owned locks to avoid sudo nagging.
debug_log "Skipping read-only Homebrew locks in $lock_dir"
fi
fi

@@ -1,6 +1,6 @@
#!/bin/bash
# Project Purge Module (mo purge)
# Removes heavy project build artifacts and dependencies
# Project Purge Module (mo purge).
# Removes heavy project build artifacts and dependencies.
set -euo pipefail

PROJECT_LIB_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
@@ -10,7 +10,7 @@ if ! command -v ensure_user_dir > /dev/null 2>&1; then
source "$CORE_LIB_DIR/common.sh"
fi

# Targets to look for (heavy build artifacts)
# Targets to look for (heavy build artifacts).
readonly PURGE_TARGETS=(
"node_modules"
"target" # Rust, Maven
@@ -29,12 +29,12 @@ readonly PURGE_TARGETS=(
".parcel-cache" # Parcel bundler
".dart_tool" # Flutter/Dart build cache
)
# Minimum age in days before considering for cleanup
# Minimum age in days before considering for cleanup.
readonly MIN_AGE_DAYS=7
# Scan depth defaults (relative to search root)
# Scan depth defaults (relative to search root).
readonly PURGE_MIN_DEPTH_DEFAULT=2
readonly PURGE_MAX_DEPTH_DEFAULT=8
# Search paths (default, can be overridden via config file)
# Search paths (default, can be overridden via config file).
readonly DEFAULT_PURGE_SEARCH_PATHS=(
"$HOME/www"
"$HOME/dev"
@@ -46,13 +46,13 @@ DEFAULT_PURGE_SEARCH_PATHS=(
"$HOME/Development"
)

# Config file for custom purge paths
# Config file for custom purge paths.
readonly PURGE_CONFIG_FILE="$HOME/.config/mole/purge_paths"

# Global array to hold actual search paths
# Resolved search paths.
PURGE_SEARCH_PATHS=()

# Project indicator files (if a directory contains these, it's likely a project)
# Project indicators for container detection.
readonly PROJECT_INDICATORS=(
"package.json"
"Cargo.toml"
@@ -68,12 +68,12 @@ readonly PROJECT_INDICATORS=(
".git"
)

# Check if a directory contains projects (directly or in subdirectories)
# Check if a directory contains projects (directly or in subdirectories).
is_project_container() {
local dir="$1"
local max_depth="${2:-2}"

# Skip hidden directories and system directories
# Skip hidden/system directories.
local basename
basename=$(basename "$dir")
[[ "$basename" == .* ]] && return 1
@@ -84,7 +84,7 @@ is_project_container() {
[[ "$basename" == "Pictures" ]] && return 1
[[ "$basename" == "Public" ]] && return 1

# Build find expression with all indicators (single find call for efficiency)
# Single find expression for indicators.
local -a find_args=("$dir" "-maxdepth" "$max_depth" "(")
local first=true
for indicator in "${PROJECT_INDICATORS[@]}"; do
@@ -97,7 +97,6 @@ is_project_container() {
done
find_args+=(")" "-print" "-quit")

# Single find call to check all indicators at once
if find "${find_args[@]}" 2> /dev/null | grep -q .; then
return 0
fi
@@ -105,24 +104,22 @@ is_project_container() {
return 1
}

# Discover project directories in $HOME
# Discover project directories in $HOME.
discover_project_dirs() {
local -a discovered=()

# First check default paths that exist
for path in "${DEFAULT_PURGE_SEARCH_PATHS[@]}"; do
if [[ -d "$path" ]]; then
discovered+=("$path")
fi
done

# Then scan $HOME for other project containers (depth 1)
# Scan $HOME for other containers (depth 1).
local dir
for dir in "$HOME"/*/; do
[[ ! -d "$dir" ]] && continue
dir="${dir%/}" # Remove trailing slash

# Skip if already in defaults
local already_found=false
for existing in "${DEFAULT_PURGE_SEARCH_PATHS[@]}"; do
if [[ "$dir" == "$existing" ]]; then
@@ -132,17 +129,15 @@ discover_project_dirs() {
done
[[ "$already_found" == "true" ]] && continue

# Check if this directory contains projects
if is_project_container "$dir" 2; then
discovered+=("$dir")
fi
done

# Return unique paths
printf '%s\n' "${discovered[@]}" | sort -u
}

# Save discovered paths to config
# Save discovered paths to config.
save_discovered_paths() {
local -a paths=("$@")

@@ -166,26 +161,20 @@ EOF
load_purge_config() {
PURGE_SEARCH_PATHS=()

# Try loading from config file
if [[ -f "$PURGE_CONFIG_FILE" ]]; then
while IFS= read -r line; do
# Remove leading/trailing whitespace
line="${line#"${line%%[![:space:]]*}"}"
line="${line%"${line##*[![:space:]]}"}"

# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^# ]] && continue

# Expand tilde to HOME
line="${line/#\~/$HOME}"

PURGE_SEARCH_PATHS+=("$line")
done < "$PURGE_CONFIG_FILE"
fi

# If no paths loaded, auto-discover and save
if [[ ${#PURGE_SEARCH_PATHS[@]} -eq 0 ]]; then
# Show discovery message if running interactively
if [[ -t 1 ]] && [[ -z "${_PURGE_DISCOVERY_SILENT:-}" ]]; then
echo -e "${GRAY}First run: discovering project directories...${NC}" >&2
fi
@@ -197,47 +186,37 @@ load_purge_config() {

if [[ ${#discovered[@]} -gt 0 ]]; then
PURGE_SEARCH_PATHS=("${discovered[@]}")
# Save for next time
save_discovered_paths "${discovered[@]}"

if [[ -t 1 ]] && [[ -z "${_PURGE_DISCOVERY_SILENT:-}" ]]; then
echo -e "${GRAY}Found ${#discovered[@]} project directories, saved to config${NC}" >&2
fi
else
# Fallback to defaults if nothing found
PURGE_SEARCH_PATHS=("${DEFAULT_PURGE_SEARCH_PATHS[@]}")
fi
fi
}

# Initialize paths on script load
# Initialize paths on script load.
load_purge_config

# Args: $1 - path to check
# Check if path is safe to clean (must be inside a project directory)
# Safe cleanup requires the path be inside a project directory.
is_safe_project_artifact() {
local path="$1"
local search_path="$2"
# Path must be absolute
if [[ "$path" != /* ]]; then
return 1
fi
# Must not be a direct child of HOME directory
# e.g., ~/.gradle is NOT safe, but ~/Projects/foo/.gradle IS safe
# Must not be a direct child of the search root.
local relative_path="${path#"$search_path"/}"
local depth=$(echo "$relative_path" | tr -cd '/' | wc -c)
# Require at least 1 level deep (inside a project folder)
# e.g., ~/www/weekly/node_modules is OK (depth >= 1)
# but ~/www/node_modules is NOT OK (depth < 1)
if [[ $depth -lt 1 ]]; then
return 1
fi
return 0
}
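
# Sketch of the depth rule above: count '/' in the path relative to the
# search root; depth 0 is a direct child of the root and gets rejected.
artifact_depth() {
    local path="$1" root="$2"
    local rel="${path#"$root"/}"
    printf '%s' "$rel" | tr -cd '/' | wc -c | tr -d ' '
}
# artifact_depth "$HOME/www/weekly/node_modules" "$HOME/www"   -> 1 (allowed)
# artifact_depth "$HOME/www/node_modules" "$HOME/www"          -> 0 (rejected)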
# Fast scan using fd or optimized find
# Args: $1 - search path, $2 - output file
# Args: $1 - search path, $2 - output file
# Scan for purge targets using strict project boundary checks
# Scan purge targets using fd (fast) or pruned find.
scan_purge_targets() {
local search_path="$1"
local output_file="$2"
@@ -255,7 +234,6 @@ scan_purge_targets() {
if [[ ! -d "$search_path" ]]; then
return
fi
# Use fd for fast parallel search if available
if command -v fd > /dev/null 2>&1; then
local fd_args=(
"--absolute-path"
@@ -273,47 +251,28 @@ scan_purge_targets() {
for target in "${PURGE_TARGETS[@]}"; do
fd_args+=("-g" "$target")
done
# Run fd command
fd "${fd_args[@]}" . "$search_path" 2> /dev/null | while IFS= read -r item; do
if is_safe_project_artifact "$item" "$search_path"; then
echo "$item"
fi
done | filter_nested_artifacts > "$output_file"
else
# Fallback to optimized find with pruning
# This prevents descending into heavily nested dirs like node_modules once found,
# providing a massive speedup (O(project_dirs) vs O(files)).
# Pruned find avoids descending into heavy directories.
local prune_args=()
# 1. Directories to prune (ignore completely)
local prune_dirs=(".git" "Library" ".Trash" "Applications")
for dir in "${prune_dirs[@]}"; do
# -name "DIR" -prune -o
prune_args+=("-name" "$dir" "-prune" "-o")
done
# 2. Targets to find (print AND prune)
# If we find node_modules, we print it and STOP looking inside it
for target in "${PURGE_TARGETS[@]}"; do
# -name "TARGET" -print -prune -o
prune_args+=("-name" "$target" "-print" "-prune" "-o")
done
# Run find command
# Logic: ( prune_pattern -prune -o target_pattern -print -prune )
# Note: We rely on implicit recursion for directories that don't match any pattern.
# -print is only called explicitly on targets.
# The prune_args loop above leaves a trailing -o that would make the find
# expression dangling, so rebuild the argument list cleanly:
local find_expr=()
# Excludes
for dir in "${prune_dirs[@]}"; do
find_expr+=("-name" "$dir" "-prune" "-o")
done
# Targets
local i=0
for target in "${PURGE_TARGETS[@]}"; do
find_expr+=("-name" "$target" "-print" "-prune")
# Add -o unless it's the very last item of targets
if [[ $i -lt $((${#PURGE_TARGETS[@]} - 1)) ]]; then
find_expr+=("-o")
fi
@@ -327,15 +286,12 @@ scan_purge_targets() {
done | filter_nested_artifacts > "$output_file"
fi
}
# Filter out nested artifacts (e.g. node_modules inside node_modules)
# Filter out nested artifacts (e.g. node_modules inside node_modules).
filter_nested_artifacts() {
while IFS= read -r item; do
local parent_dir=$(dirname "$item")
local is_nested=false
for target in "${PURGE_TARGETS[@]}"; do
# Check if parent directory IS a target or IS INSIDE a target
# e.g. .../node_modules/foo/node_modules -> parent has node_modules
# Use more strict matching to avoid false positives like "my_node_modules_backup"
if [[ "$parent_dir" == *"/$target/"* || "$parent_dir" == *"/$target" ]]; then
is_nested=true
break
@@ -347,14 +303,13 @@ filter_nested_artifacts() {
done
}
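
# Sketch of the nested check above; the slash-anchored patterns are what stop
# names like "my_node_modules_backup" from matching "node_modules".
is_nested_in() {
    local parent_dir="$1" target="$2"
    [[ "$parent_dir" == *"/$target/"* || "$parent_dir" == *"/$target" ]]
}
# is_nested_in "/p/node_modules/foo" "node_modules"         -> true (filtered)
# is_nested_in "/p/my_node_modules_backup" "node_modules"   -> false (kept)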
# Args: $1 - path
# Check if a path was modified recently (safety check)
# Check if a path was modified recently (safety check).
is_recently_modified() {
local path="$1"
local age_days=$MIN_AGE_DAYS
if [[ ! -e "$path" ]]; then
return 1
fi
# Get modification time using base.sh helper (handles GNU vs BSD stat)
local mod_time
mod_time=$(get_file_mtime "$path")
local current_time=$(date +%s)
@@ -367,7 +322,7 @@ is_recently_modified() {
fi
}
# Args: $1 - path
# Get human-readable size of directory
# Get directory size in KB.
get_dir_size_kb() {
local path="$1"
if [[ -d "$path" ]]; then
@@ -376,10 +331,7 @@ get_dir_size_kb() {
echo "0"
fi
}
# Simple category selector (for purge only)
# Args: category names and metadata as arrays (passed via global vars)
# Uses PURGE_RECENT_CATEGORIES to mark categories with recent items (default unselected)
# Returns: selected indices in PURGE_SELECTION_RESULT (comma-separated)
# Purge category selector.
select_purge_categories() {
local -a categories=("$@")
local total_items=${#categories[@]}
@@ -388,8 +340,7 @@ select_purge_categories() {
return 1
fi

# Calculate items per page based on terminal height
# Reserved: header(2) + blank(2) + footer(1) = 5 rows
# Calculate items per page based on terminal height.
_get_items_per_page() {
local term_height=24
if [[ -t 0 ]] || [[ -t 2 ]]; then

@@ -1,11 +1,9 @@
#!/bin/bash
# System-Level Cleanup Module
# Deep system cleanup (requires sudo) and Time Machine failed backups
# System-Level Cleanup Module (requires sudo).
set -euo pipefail
# Deep system cleanup (requires sudo)
# System caches, logs, and temp files.
clean_deep_system() {
stop_section_spinner
# Clean old system caches
local cache_cleaned=0
safe_sudo_find_delete "/Library/Caches" "*.cache" "$MOLE_TEMP_FILE_AGE_DAYS" "f" && cache_cleaned=1 || true
safe_sudo_find_delete "/Library/Caches" "*.tmp" "$MOLE_TEMP_FILE_AGE_DAYS" "f" && cache_cleaned=1 || true
@@ -15,7 +13,6 @@ clean_deep_system() {
safe_sudo_find_delete "/private/tmp" "*" "${MOLE_TEMP_FILE_AGE_DAYS}" "f" && tmp_cleaned=1 || true
safe_sudo_find_delete "/private/var/tmp" "*" "${MOLE_TEMP_FILE_AGE_DAYS}" "f" && tmp_cleaned=1 || true
[[ $tmp_cleaned -eq 1 ]] && log_success "System temp files"
# Clean crash reports
safe_sudo_find_delete "/Library/Logs/DiagnosticReports" "*" "$MOLE_CRASH_REPORT_AGE_DAYS" "f" || true
log_success "System crash reports"
safe_sudo_find_delete "/private/var/log" "*.log" "$MOLE_LOG_AGE_DAYS" "f" || true
@@ -91,18 +88,15 @@ clean_deep_system() {
stop_section_spinner
[[ $diag_logs_cleaned -eq 1 ]] && log_success "System diagnostic trace logs"
}
# Clean incomplete Time Machine backups
# Incomplete Time Machine backups.
clean_time_machine_failed_backups() {
local tm_cleaned=0
# Check if tmutil is available
if ! command -v tmutil > /dev/null 2>&1; then
echo -e " ${GREEN}${ICON_SUCCESS}${NC} No incomplete backups found"
return 0
fi
# Start spinner early (before potentially slow tmutil command)
start_section_spinner "Checking Time Machine configuration..."
local spinner_active=true
# Check if Time Machine is configured (with short timeout for faster response)
local tm_info
tm_info=$(run_with_timeout 2 tmutil destinationinfo 2>&1 || echo "failed")
if [[ "$tm_info" == *"No destinations configured"* || "$tm_info" == "failed" ]]; then
@@ -119,7 +113,6 @@ clean_time_machine_failed_backups() {
echo -e " ${GREEN}${ICON_SUCCESS}${NC} No incomplete backups found"
return 0
fi
# Skip if backup is running (check actual Running status, not just daemon existence)
if tmutil status 2> /dev/null | grep -q "Running = 1"; then
if [[ "$spinner_active" == "true" ]]; then
stop_section_spinner
@@ -127,22 +120,19 @@ clean_time_machine_failed_backups() {
echo -e " ${YELLOW}!${NC} Time Machine backup in progress, skipping cleanup"
return 0
fi
# Update spinner message for volume scanning
if [[ "$spinner_active" == "true" ]]; then
start_section_spinner "Checking backup volumes..."
fi
# Fast pre-scan: check which volumes have Backups.backupdb (avoid expensive tmutil checks)
# Fast pre-scan for backup volumes to avoid slow tmutil checks.
local -a backup_volumes=()
for volume in /Volumes/*; do
[[ -d "$volume" ]] || continue
[[ "$volume" == "/Volumes/MacintoshHD" || "$volume" == "/" ]] && continue
[[ -L "$volume" ]] && continue
# Quick check: does this volume have backup directories?
if [[ -d "$volume/Backups.backupdb" ]] || [[ -d "$volume/.MobileBackups" ]]; then
backup_volumes+=("$volume")
fi
done
# If no backup volumes found, stop spinner and return
if [[ ${#backup_volumes[@]} -eq 0 ]]; then
if [[ "$spinner_active" == "true" ]]; then
stop_section_spinner
@@ -150,23 +140,20 @@ clean_time_machine_failed_backups() {
echo -e " ${GREEN}${ICON_SUCCESS}${NC} No incomplete backups found"
return 0
fi
# Update spinner message: we have potential backup volumes, now scan them
if [[ "$spinner_active" == "true" ]]; then
start_section_spinner "Scanning backup volumes..."
fi
for volume in "${backup_volumes[@]}"; do
# Skip network volumes (quick check)
local fs_type
fs_type=$(run_with_timeout 1 command df -T "$volume" 2> /dev/null | tail -1 | awk '{print $2}' || echo "unknown")
case "$fs_type" in
nfs | smbfs | afpfs | cifs | webdav | unknown) continue ;;
esac
# HFS+ style backups (Backups.backupdb)
local backupdb_dir="$volume/Backups.backupdb"
if [[ -d "$backupdb_dir" ]]; then
while IFS= read -r inprogress_file; do
[[ -d "$inprogress_file" ]] || continue
# Only delete old incomplete backups (safety window)
# Only delete old incomplete backups (safety window).
local file_mtime=$(get_file_mtime "$inprogress_file")
local current_time=$(date +%s)
local hours_old=$(((current_time - file_mtime) / 3600))
@@ -175,7 +162,6 @@ clean_time_machine_failed_backups() {
fi
local size_kb=$(get_path_size_kb "$inprogress_file")
[[ "$size_kb" -le 0 ]] && continue
# Stop spinner before first output
if [[ "$spinner_active" == "true" ]]; then
stop_section_spinner
spinner_active=false
@@ -188,7 +174,6 @@ clean_time_machine_failed_backups() {
note_activity
continue
fi
# Real deletion
if ! command -v tmutil > /dev/null 2>&1; then
echo -e " ${YELLOW}!${NC} tmutil not available, skipping: $backup_name"
continue
@@ -205,17 +190,15 @@ clean_time_machine_failed_backups() {
fi
done < <(run_with_timeout 15 find "$backupdb_dir" -maxdepth 3 -type d \( -name "*.inProgress" -o -name "*.inprogress" \) 2> /dev/null || true)
fi
# APFS style backups (.backupbundle or .sparsebundle)
# APFS bundles.
for bundle in "$volume"/*.backupbundle "$volume"/*.sparsebundle; do
[[ -e "$bundle" ]] || continue
[[ -d "$bundle" ]] || continue
# Check if bundle is mounted
local bundle_name=$(basename "$bundle")
local mounted_path=$(hdiutil info 2> /dev/null | grep -A 5 "image-path.*$bundle_name" | grep "/Volumes/" | awk '{print $1}' | head -1 || echo "")
if [[ -n "$mounted_path" && -d "$mounted_path" ]]; then
while IFS= read -r inprogress_file; do
[[ -d "$inprogress_file" ]] || continue
# Only delete old incomplete backups (safety window)
local file_mtime=$(get_file_mtime "$inprogress_file")
local current_time=$(date +%s)
local hours_old=$(((current_time - file_mtime) / 3600))
@@ -224,7 +207,6 @@ clean_time_machine_failed_backups() {
fi
local size_kb=$(get_path_size_kb "$inprogress_file")
[[ "$size_kb" -le 0 ]] && continue
# Stop spinner before first output
if [[ "$spinner_active" == "true" ]]; then
stop_section_spinner
spinner_active=false
@@ -237,7 +219,6 @@ clean_time_machine_failed_backups() {
note_activity
continue
fi
# Real deletion
if ! command -v tmutil > /dev/null 2>&1; then
continue
fi
@@ -255,7 +236,6 @@ clean_time_machine_failed_backups() {
fi
done
done
# Stop spinner if still active (no backups found)
if [[ "$spinner_active" == "true" ]]; then
stop_section_spinner
fi
@@ -263,33 +243,27 @@ clean_time_machine_failed_backups() {
echo -e " ${GREEN}${ICON_SUCCESS}${NC} No incomplete backups found"
fi
}
|
||||
# Clean local APFS snapshots (keep the most recent snapshot)
|
||||
# Local APFS snapshots (keep the most recent).
|
||||
clean_local_snapshots() {
|
||||
# Check if tmutil is available
|
||||
if ! command -v tmutil > /dev/null 2>&1; then
|
||||
return 0
|
||||
fi
|
||||
start_section_spinner "Checking local snapshots..."
|
||||
# Check for local snapshots
|
||||
local snapshot_list
|
||||
snapshot_list=$(tmutil listlocalsnapshots / 2> /dev/null)
|
||||
stop_section_spinner
|
||||
[[ -z "$snapshot_list" ]] && return 0
|
||||
# Parse and clean snapshots
|
||||
local cleaned_count=0
|
||||
local total_cleaned_size=0 # Estimation not possible without thin
|
||||
local newest_ts=0
|
||||
local newest_name=""
|
||||
local -a snapshots=()
|
||||
# Find the most recent snapshot to keep at least one version
|
||||
while IFS= read -r line; do
|
||||
# Format: com.apple.TimeMachine.2023-10-25-120000
|
||||
if [[ "$line" =~ com\.apple\.TimeMachine\.([0-9]{4})-([0-9]{2})-([0-9]{2})-([0-9]{6}) ]]; then
|
||||
local snap_name="${BASH_REMATCH[0]}"
|
||||
snapshots+=("$snap_name")
|
||||
local date_str="${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]} ${BASH_REMATCH[4]:0:2}:${BASH_REMATCH[4]:2:2}:${BASH_REMATCH[4]:4:2}"
|
||||
local snap_ts=$(date -j -f "%Y-%m-%d %H:%M:%S" "$date_str" "+%s" 2> /dev/null || echo "0")
|
||||
# Skip if parsing failed
|
||||
[[ "$snap_ts" == "0" ]] && continue
|
||||
if [[ "$snap_ts" -gt "$newest_ts" ]]; then
|
||||
newest_ts="$snap_ts"
|
||||
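# Illustrative sketch (not part of this change): the snapshot-name parsing
# above, standalone. BSD date on macOS converts the extracted fields back
# into epoch seconds; the sample snapshot name is hypothetical.
#   snap="com.apple.TimeMachine.2023-10-25-120000"
#   if [[ "$snap" =~ \.([0-9]{4})-([0-9]{2})-([0-9]{2})-([0-9]{6})$ ]]; then
#       ts=$(date -j -f "%Y-%m-%d %H%M%S" \
#           "${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]} ${BASH_REMATCH[4]}" "+%s")
#   fi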
@@ -332,16 +306,13 @@ clean_local_snapshots() {

local snap_name
for snap_name in "${snapshots[@]}"; do
# Format: com.apple.TimeMachine.2023-10-25-120000
if [[ "$snap_name" =~ com\.apple\.TimeMachine\.([0-9]{4})-([0-9]{2})-([0-9]{2})-([0-9]{6}) ]]; then
# Remove all but the most recent snapshot
if [[ "${BASH_REMATCH[0]}" != "$newest_name" ]]; then
if [[ "$DRY_RUN" == "true" ]]; then
echo -e " ${YELLOW}${ICON_DRY_RUN}${NC} Local snapshot: $snap_name ${YELLOW}dry-run${NC}"
((cleaned_count++))
note_activity
else
# Secure removal
if sudo tmutil deletelocalsnapshots "${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]}-${BASH_REMATCH[4]}" > /dev/null 2>&1; then
echo -e " ${GREEN}${ICON_SUCCESS}${NC} Removed snapshot: $snap_name"
((cleaned_count++))

@@ -62,7 +62,7 @@ scan_external_volumes() {
done
stop_section_spinner
}
# Clean Finder metadata (.DS_Store files)
# Finder metadata (.DS_Store).
clean_finder_metadata() {
stop_section_spinner
if [[ "$PROTECT_FINDER_METADATA" == "true" ]]; then
@@ -72,16 +72,14 @@ clean_finder_metadata() {
fi
clean_ds_store_tree "$HOME" "Home directory (.DS_Store)"
}
# Clean macOS system caches
# macOS system caches and user-level leftovers.
clean_macos_system_caches() {
stop_section_spinner
# Clean saved application states with protection for System Settings
# Note: safe_clean already calls should_protect_path for each file
# safe_clean already checks protected paths.
safe_clean ~/Library/Saved\ Application\ State/* "Saved application states" || true
safe_clean ~/Library/Caches/com.apple.photoanalysisd "Photo analysis cache" || true
safe_clean ~/Library/Caches/com.apple.akd "Apple ID cache" || true
safe_clean ~/Library/Caches/com.apple.WebKit.Networking/* "WebKit network cache" || true
# Extra user items
safe_clean ~/Library/DiagnosticReports/* "Diagnostic reports" || true
safe_clean ~/Library/Caches/com.apple.QuickLook.thumbnailcache "QuickLook thumbnails" || true
safe_clean ~/Library/Caches/Quick\ Look/* "QuickLook cache" || true
@@ -158,28 +156,26 @@ clean_mail_downloads() {
note_activity
fi
}
# Clean sandboxed app caches
# Sandboxed app caches.
clean_sandboxed_app_caches() {
stop_section_spinner
safe_clean ~/Library/Containers/com.apple.wallpaper.agent/Data/Library/Caches/* "Wallpaper agent cache"
safe_clean ~/Library/Containers/com.apple.mediaanalysisd/Data/Library/Caches/* "Media analysis cache"
safe_clean ~/Library/Containers/com.apple.AppStore/Data/Library/Caches/* "App Store cache"
safe_clean ~/Library/Containers/com.apple.configurator.xpc.InternetService/Data/tmp/* "Apple Configurator temp files"
# Clean sandboxed app caches - iterate quietly to avoid UI flashing
local containers_dir="$HOME/Library/Containers"
[[ ! -d "$containers_dir" ]] && return 0
start_section_spinner "Scanning sandboxed apps..."
local total_size=0
local cleaned_count=0
local found_any=false
# Enable nullglob for safe globbing; restore afterwards
# Use nullglob to avoid literal globs.
local _ng_state
_ng_state=$(shopt -p nullglob || true)
shopt -s nullglob
for container_dir in "$containers_dir"/*; do
process_container_cache "$container_dir"
done
# Restore nullglob to previous state
eval "$_ng_state"
stop_section_spinner
if [[ "$found_any" == "true" ]]; then
@@ -195,11 +191,10 @@ clean_sandboxed_app_caches() {
note_activity
fi
}
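# Illustrative sketch (not part of this change): the nullglob save/restore
# idiom used above, standalone.
#   _state=$(shopt -p nullglob)   # prints e.g. "shopt -u nullglob"
#   shopt -s nullglob             # unmatched globs expand to nothing
#   for f in "$dir"/*; do process "$f"; done   # $dir/process are placeholders
#   eval "$_state"                # restore the caller's setting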
# Process a single container cache directory (reduces nesting)
# Process a single container cache directory.
process_container_cache() {
local container_dir="$1"
[[ -d "$container_dir" ]] || return 0
# Extract bundle ID and check protection status early
local bundle_id=$(basename "$container_dir")
if is_critical_system_component "$bundle_id"; then
return 0
@@ -208,17 +203,15 @@ process_container_cache() {
return 0
fi
local cache_dir="$container_dir/Data/Library/Caches"
# Check if dir exists and has content
[[ -d "$cache_dir" ]] || return 0
# Fast check if empty using find (more efficient than ls)
# Fast non-empty check.
if find "$cache_dir" -mindepth 1 -maxdepth 1 -print -quit 2> /dev/null | grep -q .; then
# Use global variables from caller for tracking
local size=$(get_path_size_kb "$cache_dir")
((total_size += size))
found_any=true
((cleaned_count++))
if [[ "$DRY_RUN" != "true" ]]; then
# Clean contents safely with local nullglob management
# Clean contents safely with local nullglob.
local _ng_state
_ng_state=$(shopt -p nullglob || true)
shopt -s nullglob
@@ -230,11 +223,11 @@ process_container_cache() {
fi
fi
}
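# Illustrative sketch (not part of this change): the emptiness probe above.
# find prints at most one entry and quits, so grep -q . succeeds only for a
# non-empty directory, without listing everything:
#   if find "$dir" -mindepth 1 -maxdepth 1 -print -quit 2> /dev/null | grep -q .; then
#       echo "has content"
#   fi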
# Clean browser caches (Safari, Chrome, Edge, Firefox, etc.)
# Browser caches (Safari/Chrome/Edge/Firefox).
clean_browsers() {
stop_section_spinner
safe_clean ~/Library/Caches/com.apple.Safari/* "Safari cache"
# Chrome/Chromium
# Chrome/Chromium.
safe_clean ~/Library/Caches/Google/Chrome/* "Chrome cache"
safe_clean ~/Library/Application\ Support/Google/Chrome/*/Application\ Cache/* "Chrome app cache"
safe_clean ~/Library/Application\ Support/Google/Chrome/*/GPUCache/* "Chrome GPU cache"
@@ -251,7 +244,7 @@ clean_browsers() {
safe_clean ~/Library/Caches/zen/* "Zen cache"
safe_clean ~/Library/Application\ Support/Firefox/Profiles/*/cache2/* "Firefox profile cache"
}
# Clean cloud storage app caches
# Cloud storage caches.
clean_cloud_storage() {
stop_section_spinner
safe_clean ~/Library/Caches/com.dropbox.* "Dropbox cache"
@@ -262,7 +255,7 @@ clean_cloud_storage() {
safe_clean ~/Library/Caches/com.box.desktop "Box cache"
safe_clean ~/Library/Caches/com.microsoft.OneDrive "OneDrive cache"
}
# Clean office application caches
# Office app caches.
clean_office_applications() {
stop_section_spinner
safe_clean ~/Library/Caches/com.microsoft.Word "Microsoft Word cache"
@@ -274,7 +267,7 @@ clean_office_applications() {
safe_clean ~/Library/Caches/org.mozilla.thunderbird/* "Thunderbird cache"
safe_clean ~/Library/Caches/com.apple.mail/* "Apple Mail cache"
}
# Clean virtualization tools
# Virtualization caches.
clean_virtualization_tools() {
stop_section_spinner
safe_clean ~/Library/Caches/com.vmware.fusion "VMware Fusion cache"
@@ -282,7 +275,7 @@ clean_virtualization_tools() {
safe_clean ~/VirtualBox\ VMs/.cache "VirtualBox cache"
safe_clean ~/.vagrant.d/tmp/* "Vagrant temporary files"
}
# Clean Application Support logs and caches
# Application Support logs/caches.
clean_application_support_logs() {
stop_section_spinner
if [[ ! -d "$HOME/Library/Application Support" ]] || ! ls "$HOME/Library/Application Support" > /dev/null 2>&1; then
@@ -294,11 +287,10 @@ clean_application_support_logs() {
local total_size=0
local cleaned_count=0
local found_any=false
# Enable nullglob for safe globbing
# Enable nullglob for safe globbing.
local _ng_state
_ng_state=$(shopt -p nullglob || true)
shopt -s nullglob
# Clean log directories and cache patterns
for app_dir in ~/Library/Application\ Support/*; do
[[ -d "$app_dir" ]] || continue
local app_name=$(basename "$app_dir")
@@ -333,7 +325,7 @@ clean_application_support_logs() {
fi
done
done
# Clean Group Containers logs
# Group Containers logs (explicit allowlist).
local known_group_containers=(
"group.com.apple.contentdelivery"
)
@@ -357,7 +349,6 @@ clean_application_support_logs() {
fi
done
done
# Restore nullglob to previous state
eval "$_ng_state"
stop_section_spinner
if [[ "$found_any" == "true" ]]; then
@@ -373,10 +364,10 @@ clean_application_support_logs() {
note_activity
fi
}
# Check and show iOS device backup info
# iOS device backup info.
check_ios_device_backups() {
local backup_dir="$HOME/Library/Application Support/MobileSync/Backup"
# Simplified check without find to avoid hanging
# Simplified check without find to avoid hanging.
if [[ -d "$backup_dir" ]]; then
local backup_kb=$(get_path_size_kb "$backup_dir")
if [[ -n "${backup_kb:-}" && "$backup_kb" -gt 102400 ]]; then
@@ -390,8 +381,7 @@ check_ios_device_backups() {
fi
return 0
}
# Env: IS_M_SERIES
# Clean Apple Silicon specific caches
# Apple Silicon specific caches (IS_M_SERIES).
clean_apple_silicon_caches() {
if [[ "${IS_M_SERIES:-false}" != "true" ]]; then
return 0

@@ -12,9 +12,7 @@ readonly MOLE_APP_PROTECTION_LOADED=1
_MOLE_CORE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
[[ -z "${MOLE_BASE_LOADED:-}" ]] && source "$_MOLE_CORE_DIR/base.sh"

# ============================================================================
# Application Management
# ============================================================================

# Critical system components protected from uninstallation
readonly SYSTEM_CRITICAL_BUNDLES=(
@@ -70,9 +68,7 @@ readonly SYSTEM_CRITICAL_BUNDLES=(

# Applications with sensitive data; protected during cleanup but removable
readonly DATA_PROTECTED_BUNDLES=(
# ============================================================================
# System Utilities & Cleanup Tools
# ============================================================================
"com.nektony.*" # App Cleaner & Uninstaller
"com.macpaw.*" # CleanMyMac, CleanMaster
"com.freemacsoft.AppCleaner" # AppCleaner
@@ -82,9 +78,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.grandperspectiv.*" # GrandPerspective
"com.binaryfruit.*" # FusionCast

# ============================================================================
# Password Managers & Security
# ============================================================================
"com.1password.*" # 1Password
"com.agilebits.*" # 1Password legacy
"com.lastpass.*" # LastPass
@@ -95,9 +89,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.authy.*" # Authy
"com.yubico.*" # YubiKey Manager

# ============================================================================
# Development Tools - IDEs & Editors
# ============================================================================
"com.jetbrains.*" # JetBrains IDEs (IntelliJ, DataGrip, etc.)
"JetBrains*" # JetBrains Application Support folders
"com.microsoft.VSCode" # Visual Studio Code
@@ -112,9 +104,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"abnerworks.Typora" # Typora (Markdown editor)
"com.uranusjr.macdown" # MacDown

# ============================================================================
# AI & LLM Tools
# ============================================================================
"com.todesktop.*" # Cursor (often uses generic todesktop ID)
"Cursor" # Cursor App Support
"com.anthropic.claude*" # Claude
@@ -136,9 +126,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.quora.poe.electron" # Poe
"chat.openai.com.*" # OpenAI web wrappers

# ============================================================================
# Development Tools - Database Clients
# ============================================================================
"com.sequelpro.*" # Sequel Pro
"com.sequel-ace.*" # Sequel Ace
"com.tinyapp.*" # TablePlus
@@ -151,9 +139,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.valentina-db.Valentina-Studio" # Valentina Studio
"com.dbvis.DbVisualizer" # DbVisualizer

# ============================================================================
# Development Tools - API & Network
# ============================================================================
"com.postmanlabs.mac" # Postman
"com.konghq.insomnia" # Insomnia
"com.CharlesProxy.*" # Charles Proxy
@@ -164,9 +150,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.telerik.Fiddler" # Fiddler
"com.usebruno.app" # Bruno (API client)

# ============================================================================
# Network Proxy & VPN Tools (pattern-based protection)
# ============================================================================
# Clash variants
"*clash*" # All Clash variants (ClashX, ClashX Pro, Clash Verge, etc)
"*Clash*" # Capitalized variants
@@ -217,9 +201,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"*Fliqlo*" # Fliqlo screensaver (all case variants)
"*fliqlo*" # Fliqlo lowercase

# ============================================================================
# Development Tools - Git & Version Control
# ============================================================================
"com.github.GitHubDesktop" # GitHub Desktop
"com.sublimemerge" # Sublime Merge
"com.torusknot.SourceTreeNotMAS" # SourceTree
@@ -229,9 +211,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.fork.Fork" # Fork
"com.axosoft.gitkraken" # GitKraken

# ============================================================================
# Development Tools - Terminal & Shell
# ============================================================================
"com.googlecode.iterm2" # iTerm2
"net.kovidgoyal.kitty" # Kitty
"io.alacritty" # Alacritty
@@ -242,9 +222,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"dev.warp.Warp-Stable" # Warp
"com.termius-dmg" # Termius (SSH client)

# ============================================================================
# Development Tools - Docker & Virtualization
# ============================================================================
"com.docker.docker" # Docker Desktop
"com.getutm.UTM" # UTM
"com.vmware.fusion" # VMware Fusion
@@ -253,9 +231,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.vagrant.*" # Vagrant
"com.orbstack.OrbStack" # OrbStack

# ============================================================================
# System Monitoring & Performance
# ============================================================================
"com.bjango.istatmenus*" # iStat Menus
"eu.exelban.Stats" # Stats
"com.monitorcontrol.*" # MonitorControl
@@ -264,9 +240,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.activity-indicator.app" # Activity Indicator
"net.cindori.sensei" # Sensei

# ============================================================================
# Window Management & Productivity
# ============================================================================
"com.macitbetter.*" # BetterTouchTool, BetterSnapTool
"com.hegenberg.*" # BetterTouchTool legacy
"com.manytricks.*" # Moom, Witch, Name Mangler, Resolutionator
@@ -284,9 +258,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.gaosun.eul" # eul (system monitor)
"com.pointum.hazeover" # HazeOver

# ============================================================================
# Launcher & Automation
# ============================================================================
"com.runningwithcrayons.Alfred" # Alfred
"com.raycast.macos" # Raycast
"com.blacktree.Quicksilver" # Quicksilver
@@ -297,9 +269,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"org.pqrs.Karabiner-Elements" # Karabiner-Elements
"com.apple.Automator" # Automator (system, but keep user workflows)

# ============================================================================
# Note-Taking & Documentation
# ============================================================================
"com.bear-writer.*" # Bear
"com.typora.*" # Typora
"com.ulyssesapp.*" # Ulysses
@@ -318,9 +288,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.reflect.ReflectApp" # Reflect
"com.inkdrop.*" # Inkdrop

# ============================================================================
# Design & Creative Tools
# ============================================================================
"com.adobe.*" # Adobe Creative Suite
"com.bohemiancoding.*" # Sketch
"com.figma.*" # Figma
@@ -338,9 +306,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.autodesk.*" # Autodesk products
"com.sketchup.*" # SketchUp

# ============================================================================
# Communication & Collaboration
# ============================================================================
"com.tencent.xinWeChat" # WeChat (Chinese users)
"com.tencent.qq" # QQ
"com.alibaba.DingTalkMac" # DingTalk
@@ -363,9 +329,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.postbox-inc.postbox" # Postbox
"com.tinyspeck.slackmacgap" # Slack legacy

# ============================================================================
# Task Management & Productivity
# ============================================================================
"com.omnigroup.OmniFocus*" # OmniFocus
"com.culturedcode.*" # Things
"com.todoist.*" # Todoist
@@ -380,9 +344,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.notion.id" # Notion (also note-taking)
"com.linear.linear" # Linear

# ============================================================================
# File Transfer & Sync
# ============================================================================
"com.panic.transmit*" # Transmit (FTP/SFTP)
"com.binarynights.ForkLift*" # ForkLift
"com.noodlesoft.Hazel" # Hazel
@@ -391,9 +353,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.apple.Xcode.CloudDocuments" # Xcode Cloud Documents
"com.synology.*" # Synology apps

# ============================================================================
# Cloud Storage & Backup (Issue #204)
# ============================================================================
"com.dropbox.*" # Dropbox
"com.getdropbox.*" # Dropbox legacy
"*dropbox*" # Dropbox helpers/updaters
@@ -420,9 +380,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.shirtpocket.*" # SuperDuper backup
"homebrew.mxcl.*" # Homebrew services

# ============================================================================
# Screenshot & Recording
# ============================================================================
"com.cleanshot.*" # CleanShot X
"com.xnipapp.xnip" # Xnip
"com.reincubate.camo" # Camo
@@ -436,9 +394,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.linebreak.CloudApp" # CloudApp
"com.droplr.droplr-mac" # Droplr

# ============================================================================
# Media & Entertainment
# ============================================================================
"com.spotify.client" # Spotify
"com.apple.Music" # Apple Music
"com.apple.podcasts" # Apple Podcasts
@@ -456,9 +412,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"tv.plex.player.desktop" # Plex
"com.netease.163music" # NetEase Music

# ============================================================================
# License Management & App Stores
# ============================================================================
"com.paddle.Paddle*" # Paddle (license management)
"com.setapp.DesktopClient" # Setapp
"com.devmate.*" # DevMate (license framework)

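# Illustrative sketch (not part of this change): how a pattern list like
# DATA_PROTECTED_BUNDLES is typically consulted. The helper name here is
# hypothetical; the real check lives elsewhere in lib/.
#   is_data_protected() {
#       local bundle_id="$1" pattern
#       for pattern in "${DATA_PROTECTED_BUNDLES[@]}"; do
#           case "$bundle_id" in $pattern) return 0 ;; esac   # glob match
#       done
#       return 1
#   }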
@@ -1,26 +1,19 @@
#!/bin/bash
# System Configuration Maintenance Module
# Fix broken preferences and broken login items
# System Configuration Maintenance Module.
# Fix broken preferences and login items.

set -euo pipefail

# ============================================================================
# Broken Preferences Detection and Cleanup
# Find and remove corrupted .plist files
# ============================================================================

# Clean corrupted preference files
# Remove corrupted preference files.
fix_broken_preferences() {
local prefs_dir="$HOME/Library/Preferences"
[[ -d "$prefs_dir" ]] || return 0

local broken_count=0

# Check main preferences directory
while IFS= read -r plist_file; do
[[ -f "$plist_file" ]] || continue

# Skip system preferences
local filename
filename=$(basename "$plist_file")
case "$filename" in
@@ -29,15 +22,13 @@ fix_broken_preferences() {
;;
esac

# Validate plist using plutil
plutil -lint "$plist_file" > /dev/null 2>&1 && continue

# Remove broken plist
safe_remove "$plist_file" true > /dev/null 2>&1 || true
((broken_count++))
done < <(command find "$prefs_dir" -maxdepth 1 -name "*.plist" -type f 2> /dev/null || true)

# Check ByHost preferences with timeout protection
# Check ByHost preferences.
local byhost_dir="$prefs_dir/ByHost"
if [[ -d "$byhost_dir" ]]; then
while IFS= read -r plist_file; do

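# Illustrative sketch (not part of this change): plutil validation,
# standalone. A zero exit status means the plist parses; the path below is
# a placeholder.
#   if plutil -lint "$HOME/Library/Preferences/com.example.plist" > /dev/null 2>&1; then
#       echo "valid"
#   else
#       echo "corrupted"
#   fi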
@@ -3,16 +3,12 @@

set -euo pipefail

# Configuration constants
# MOLE_TM_THIN_TIMEOUT: Max seconds to wait for tmutil thinning (default: 180)
# MOLE_TM_THIN_VALUE: Bytes to thin for local snapshots (default: 9999999999)
# MOLE_MAIL_DOWNLOADS_MIN_KB: Minimum size in KB before cleaning Mail attachments (default: 5120)
# MOLE_MAIL_AGE_DAYS: Minimum age in days for Mail attachments to be cleaned (default: 30)
# Config constants (override via env).
readonly MOLE_TM_THIN_TIMEOUT=180
readonly MOLE_TM_THIN_VALUE=9999999999
readonly MOLE_SQLITE_MAX_SIZE=104857600 # 100MB

# Helper function to get appropriate icon and color for dry-run mode
# Dry-run aware output.
opt_msg() {
local message="$1"
if [[ "${MOLE_DRY_RUN:-0}" == "1" ]]; then
@@ -92,7 +88,6 @@ is_memory_pressure_high() {
}

flush_dns_cache() {
# Skip actual flush in dry-run mode
if [[ "${MOLE_DRY_RUN:-0}" == "1" ]]; then
MOLE_DNS_FLUSHED=1
return 0
@@ -105,7 +100,7 @@ flush_dns_cache() {
return 1
}

# Rebuild databases and flush caches
# Basic system maintenance.
opt_system_maintenance() {
if flush_dns_cache; then
opt_msg "DNS cache flushed"
@@ -120,10 +115,8 @@ opt_system_maintenance() {
fi
}

# Refresh Finder caches (QuickLook and icon services)
# Note: Safari caches are cleaned separately in clean/user.sh, so excluded here
# Refresh Finder caches (QuickLook/icon services).
opt_cache_refresh() {
# Skip qlmanage commands in dry-run mode
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
qlmanage -r cache > /dev/null 2>&1 || true
qlmanage -r > /dev/null 2>&1 || true
@@ -151,7 +144,7 @@ opt_cache_refresh() {

# Removed: opt_radio_refresh - Interrupts active user connections (WiFi, Bluetooth), degrading UX

# Saved state: remove OLD app saved states (7+ days)
# Old saved states cleanup.
opt_saved_state_cleanup() {
local state_dir="$HOME/Library/Saved Application State"

@@ -193,7 +186,7 @@ opt_fix_broken_configs() {
fi
}

# Network cache optimization
# DNS cache refresh.
opt_network_optimization() {
if [[ "${MOLE_DNS_FLUSHED:-0}" == "1" ]]; then
opt_msg "DNS cache already refreshed"
@@ -209,8 +202,7 @@ opt_network_optimization() {
fi
}

# SQLite database vacuum optimization
# Compresses and optimizes SQLite databases for Mail, Messages, Safari
# SQLite vacuum for Mail/Messages/Safari (safety checks applied).
opt_sqlite_vacuum() {
if ! command -v sqlite3 > /dev/null 2>&1; then
echo -e " ${GRAY}-${NC} Database optimization already optimal (sqlite3 unavailable)"
@@ -254,15 +246,13 @@ opt_sqlite_vacuum() {
[[ ! -f "$db_file" ]] && continue
[[ "$db_file" == *"-wal" || "$db_file" == *"-shm" ]] && continue

# Skip if protected
should_protect_path "$db_file" && continue

# Verify it's a SQLite database
if ! file "$db_file" 2> /dev/null | grep -q "SQLite"; then
continue
fi

# Safety check 1: Skip large databases (>100MB) to avoid timeouts
# Skip large DBs (>100MB).
local file_size
file_size=$(get_file_size "$db_file")
if [[ "$file_size" -gt "$MOLE_SQLITE_MAX_SIZE" ]]; then
@@ -270,7 +260,7 @@ opt_sqlite_vacuum() {
continue
fi

# Safety check 2: Skip if freelist is tiny (already compact)
# Skip if freelist is tiny (already compact).
local page_info=""
page_info=$(run_with_timeout 5 sqlite3 "$db_file" "PRAGMA page_count; PRAGMA freelist_count;" 2> /dev/null || echo "")
local page_count=""
@@ -284,7 +274,7 @@ opt_sqlite_vacuum() {
fi
fi

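# Illustrative note (not part of this change): VACUUM only reclaims pages
# sitting on SQLite's freelist, so the probe above is a cheap way to skip
# no-op work ($db is a placeholder):
#   sqlite3 "$db" "PRAGMA page_count; PRAGMA freelist_count;"
# If freelist_count is a tiny fraction of page_count, the database is
# already compact and the VACUUM is skipped.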
# Safety check 3: Verify database integrity before VACUUM
# Verify integrity before VACUUM.
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
local integrity_check=""
set +e
@@ -292,14 +282,12 @@ opt_sqlite_vacuum() {
local integrity_status=$?
set -e

# Skip if integrity check failed or database is corrupted
if [[ $integrity_status -ne 0 ]] || ! echo "$integrity_check" | grep -q "ok"; then
((skipped++))
continue
fi
fi

# Try to vacuum (skip in dry-run mode)
local exit_code=0
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
set +e
@@ -315,7 +303,6 @@ opt_sqlite_vacuum() {
((failed++))
fi
else
# In dry-run mode, just count the database
((vacuumed++))
fi
done < <(compgen -G "$pattern" || true)
@@ -346,8 +333,7 @@ opt_sqlite_vacuum() {
fi
}

# LaunchServices database rebuild
# Fixes "Open with" menu issues, duplicate apps, broken file associations
# LaunchServices rebuild ("Open with" issues).
opt_launch_services_rebuild() {
if [[ -t 1 ]]; then
start_inline_spinner ""
@@ -358,7 +344,6 @@ opt_launch_services_rebuild() {
if [[ -f "$lsregister" ]]; then
local success=0

# Skip actual rebuild in dry-run mode
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
set +e
"$lsregister" -r -domain local -domain user -domain system > /dev/null 2>&1
@@ -369,7 +354,7 @@ opt_launch_services_rebuild() {
fi
set -e
else
success=0 # Assume success in dry-run mode
success=0
fi

if [[ -t 1 ]]; then
@@ -390,18 +375,16 @@ opt_launch_services_rebuild() {
fi
}

# Font cache rebuild
# Fixes font rendering issues, missing fonts, and character display problems
# Font cache rebuild.
opt_font_cache_rebuild() {
local success=false

# Skip actual font cache removal in dry-run mode
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
if sudo atsutil databases -remove > /dev/null 2>&1; then
success=true
fi
else
success=true # Assume success in dry-run mode
success=true
fi

if [[ "$success" == "true" ]]; then
@@ -417,8 +400,7 @@ opt_font_cache_rebuild() {
# - opt_dyld_cache_update: Low benefit, time-consuming, auto-managed by macOS
# - opt_system_services_refresh: Risk of data loss when killing system services

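# Illustrative sketch (not part of this change): the dry-run gate used
# throughout this module, reduced to its core.
#   if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
#       sudo atsutil databases -remove > /dev/null 2>&1   # real action
#   fi
#   opt_msg "Font caches rebuilt"   # reported either way, dry-run aware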
# Memory pressure relief
# Clears inactive memory and disk cache to improve system responsiveness
# Memory pressure relief.
opt_memory_pressure_relief() {
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
if ! is_memory_pressure_high; then
@@ -438,8 +420,7 @@ opt_memory_pressure_relief() {
fi
}

# Network stack optimization
# Flushes routing table and ARP cache to resolve network issues
# Network stack reset (route + ARP).
opt_network_stack_optimize() {
local route_flushed="false"
local arp_flushed="false"
@@ -460,12 +441,10 @@ opt_network_stack_optimize() {
return 0
fi

# Flush routing table
if sudo route -n flush > /dev/null 2>&1; then
route_flushed="true"
fi

# Clear ARP cache
if sudo arp -a -d > /dev/null 2>&1; then
arp_flushed="true"
fi
@@ -487,8 +466,7 @@ opt_network_stack_optimize() {
fi
}

# Disk permissions repair
# Fixes user home directory permission issues
# User directory permissions repair.
opt_disk_permissions_repair() {
local user_id
user_id=$(id -u)
@@ -524,11 +502,7 @@ opt_disk_permissions_repair() {
fi
}

# Bluetooth module reset
# Resets Bluetooth daemon to fix connectivity issues
# Intelligently detects Bluetooth audio usage:
# 1. Checks if default audio output is Bluetooth (precise)
# 2. Falls back to Bluetooth + media app detection (compatibility)
# Bluetooth reset (skip if HID/audio active).
opt_bluetooth_reset() {
local spinner_started="false"
if [[ -t 1 ]]; then
@@ -545,26 +519,20 @@ opt_bluetooth_reset() {
return 0
fi

# Check if any audio is playing through Bluetooth
local bt_audio_active=false

# Method 1: Check if default audio output is Bluetooth (precise)
local audio_info
audio_info=$(system_profiler SPAudioDataType 2> /dev/null || echo "")

# Extract default output device information
local default_output
default_output=$(echo "$audio_info" | awk '/Default Output Device: Yes/,/^$/' 2> /dev/null || echo "")

# Check if transport type is Bluetooth
if echo "$default_output" | grep -qi "Transport:.*Bluetooth"; then
bt_audio_active=true
fi

# Method 2: Fallback - Bluetooth connected + media apps running (compatibility)
if [[ "$bt_audio_active" == "false" ]]; then
if system_profiler SPBluetoothDataType 2> /dev/null | grep -q "Connected: Yes"; then
# Extended media apps list for broader coverage
local -a media_apps=("Music" "Spotify" "VLC" "QuickTime Player" "TV" "Podcasts" "Safari" "Google Chrome" "Chrome" "Firefox" "Arc" "IINA" "mpv")
for app in "${media_apps[@]}"; do
if pgrep -x "$app" > /dev/null 2>&1; then
@@ -583,7 +551,6 @@ opt_bluetooth_reset() {
return 0
fi

# Safe to reset Bluetooth
if sudo pkill -TERM bluetoothd > /dev/null 2>&1; then
sleep 1
if pgrep -x bluetoothd > /dev/null 2>&1; then
@@ -609,11 +576,8 @@ opt_bluetooth_reset() {
fi
}

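# Illustrative sketch (not part of this change): the default-output probe
# above, standalone. awk prints the block from "Default Output Device: Yes"
# to the next blank line, which is then checked for a Bluetooth transport:
#   system_profiler SPAudioDataType 2> /dev/null |
#       awk '/Default Output Device: Yes/,/^$/' |
#       grep -qi "Transport:.*Bluetooth" && echo "BT audio in use"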
# Spotlight index optimization
# Rebuilds Spotlight index if search is slow or results are inaccurate
# Only runs if index is actually problematic
# Spotlight index check/rebuild (only if slow).
opt_spotlight_index_optimize() {
# Check if Spotlight indexing is disabled
local spotlight_status
spotlight_status=$(mdutil -s / 2> /dev/null || echo "")

@@ -622,9 +586,7 @@ opt_spotlight_index_optimize() {
return 0
fi

# Check if indexing is currently running
if echo "$spotlight_status" | grep -qi "Indexing enabled" && ! echo "$spotlight_status" | grep -qi "Indexing and searching disabled"; then
# Check index health by testing search speed twice
local slow_count=0
local test_start test_end test_duration
for _ in 1 2; do
@@ -663,13 +625,11 @@ opt_spotlight_index_optimize() {
fi
}

# Dock cache refresh
# Fixes broken icons, duplicate items, and visual glitches in the Dock
# Dock cache refresh.
opt_dock_refresh() {
local dock_support="$HOME/Library/Application Support/Dock"
local refreshed=false

# Remove Dock database files (icons, positions, etc.)
if [[ -d "$dock_support" ]]; then
while IFS= read -r db_file; do
if [[ -f "$db_file" ]]; then
@@ -678,14 +638,11 @@ opt_dock_refresh() {
done < <(find "$dock_support" -name "*.db" -type f 2> /dev/null || true)
fi

# Also clear Dock plist cache
local dock_plist="$HOME/Library/Preferences/com.apple.dock.plist"
if [[ -f "$dock_plist" ]]; then
# Just touch to invalidate cache, don't delete (preserves user settings)
touch "$dock_plist" 2> /dev/null || true
fi

# Restart Dock to apply changes (skip in dry-run mode)
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
killall Dock 2> /dev/null || true
fi
@@ -696,7 +653,7 @@ opt_dock_refresh() {
opt_msg "Dock refreshed"
}

# Execute optimization by action name
# Dispatch optimization by action name.
execute_optimization() {
local action="$1"
local path="${2:-}"

@@ -2,18 +2,13 @@

set -euo pipefail

# Ensure common.sh is loaded
# Ensure common.sh is loaded.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
[[ -z "${MOLE_COMMON_LOADED:-}" ]] && source "$SCRIPT_DIR/lib/core/common.sh"

# Batch uninstall functionality with minimal confirmations
# Replaces the overly verbose individual confirmation approach
# Batch uninstall with a single confirmation.

# ============================================================================
# Configuration: User Data Detection Patterns
# ============================================================================
# Directories that typically contain user-customized configurations, themes,
# or personal data that users might want to backup before uninstalling
# User data detection patterns (prompt user to backup if found).
readonly SENSITIVE_DATA_PATTERNS=(
"\.warp" # Warp terminal configs/themes
"/\.config/" # Standard Unix config directory
@@ -26,24 +21,20 @@ readonly SENSITIVE_DATA_PATTERNS=(
"/\.gnupg/" # GPG keys (critical)
)

# Join patterns into a single regex for grep
# Join patterns into a single regex for grep.
SENSITIVE_DATA_REGEX=$(
IFS='|'
echo "${SENSITIVE_DATA_PATTERNS[*]}"
)

# Decode and validate base64 encoded file list
# Returns decoded string if valid, empty string otherwise
# Decode and validate base64 file list (safe for set -e).
decode_file_list() {
local encoded="$1"
local app_name="$2"
local decoded

# Decode base64 data (macOS uses -D, GNU uses -d)
# Try macOS format first, then GNU format for compatibility
# IMPORTANT: Always return 0 to prevent set -e from terminating the script
# macOS uses -D, GNU uses -d. Always return 0 for set -e safety.
if ! decoded=$(printf '%s' "$encoded" | base64 -D 2> /dev/null); then
# Fallback to GNU base64 format
if ! decoded=$(printf '%s' "$encoded" | base64 -d 2> /dev/null); then
log_error "Failed to decode file list for $app_name" >&2
echo ""
@@ -51,14 +42,12 @@ decode_file_list() {
fi
fi

# Validate decoded data doesn't contain null bytes
if [[ "$decoded" =~ $'\0' ]]; then
log_warning "File list for $app_name contains null bytes, rejecting" >&2
echo ""
return 0 # Return success with empty string
fi

# Validate paths look reasonable (each line should be a path or empty)
while IFS= read -r line; do
if [[ -n "$line" && ! "$line" =~ ^/ ]]; then
log_warning "Invalid path in file list for $app_name: $line" >&2
@@ -70,24 +59,21 @@ decode_file_list() {
echo "$decoded"
return 0
}
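# Illustrative sketch (not part of this change): the base64 portability
# fallback above, standalone. macOS base64 decodes with -D, GNU coreutils
# with -d, so both are tried:
#   decoded=$(printf '%s' "$encoded" | base64 -D 2> /dev/null) ||
#       decoded=$(printf '%s' "$encoded" | base64 -d 2> /dev/null) ||
#       decoded=""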
# Note: find_app_files() and calculate_total_size() functions now in lib/core/common.sh
# Note: find_app_files() and calculate_total_size() are in lib/core/common.sh.

# Stop Launch Agents and Daemons for an app
# Args: $1 = bundle_id, $2 = has_system_files (true/false)
# Stop Launch Agents/Daemons for an app.
stop_launch_services() {
local bundle_id="$1"
local has_system_files="${2:-false}"

[[ -z "$bundle_id" || "$bundle_id" == "unknown" ]] && return 0

# User-level Launch Agents
if [[ -d ~/Library/LaunchAgents ]]; then
while IFS= read -r -d '' plist; do
launchctl unload "$plist" 2> /dev/null || true
done < <(find ~/Library/LaunchAgents -maxdepth 1 -name "${bundle_id}*.plist" -print0 2> /dev/null)
fi

# System-level services (requires sudo)
if [[ "$has_system_files" == "true" ]]; then
if [[ -d /Library/LaunchAgents ]]; then
while IFS= read -r -d '' plist; do
@@ -102,9 +88,7 @@ stop_launch_services() {
fi
}

# Remove a list of files (handles both regular files and symlinks)
# Args: $1 = file_list (newline-separated), $2 = use_sudo (true/false)
# Returns: number of files removed
# Remove files (handles symlinks, optional sudo).
remove_file_list() {
local file_list="$1"
local use_sudo="${2:-false}"
@@ -114,14 +98,12 @@ remove_file_list() {
[[ -n "$file" && -e "$file" ]] || continue

if [[ -L "$file" ]]; then
# Symlink: use direct rm
if [[ "$use_sudo" == "true" ]]; then
sudo rm "$file" 2> /dev/null && ((count++)) || true
else
rm "$file" 2> /dev/null && ((count++)) || true
fi
else
# Regular file/directory: use safe_remove
if [[ "$use_sudo" == "true" ]]; then
safe_sudo_remove "$file" && ((count++)) || true
else
@@ -133,8 +115,7 @@ remove_file_list() {
echo "$count"
}

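# Illustrative sketch (not part of this change): the LaunchAgent teardown
# above for a single bundle id (the id is a placeholder):
#   bundle_id="com.example.app"
#   find ~/Library/LaunchAgents -maxdepth 1 -name "${bundle_id}*.plist" -print0 2> /dev/null |
#       while IFS= read -r -d '' plist; do
#           launchctl unload "$plist" 2> /dev/null || true
#       done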
# Batch uninstall with single confirmation
# Globals: selected_apps (read) - array of selected applications
# Batch uninstall with single confirmation.
batch_uninstall_applications() {
local total_size_freed=0

@@ -144,19 +125,18 @@ batch_uninstall_applications() {
return 0
fi

# Pre-process: Check for running apps and calculate total impact
# Pre-scan: running apps, sudo needs, size.
local -a running_apps=()
local -a sudo_apps=()
local total_estimated_size=0
local -a app_details=()

# Analyze selected apps with progress indicator
if [[ -t 1 ]]; then start_inline_spinner "Scanning files..."; fi
for selected_app in "${selected_apps[@]}"; do
[[ -z "$selected_app" ]] && continue
IFS='|' read -r _ app_path app_name bundle_id _ _ <<< "$selected_app"

# Check if app is running using executable name from bundle
# Check running app by bundle executable if available.
local exec_name=""
if [[ -e "$app_path/Contents/Info.plist" ]]; then
exec_name=$(defaults read "$app_path/Contents/Info.plist" CFBundleExecutable 2> /dev/null || echo "")
@@ -166,11 +146,7 @@ batch_uninstall_applications() {
running_apps+=("$app_name")
fi

# Check if app requires sudo to delete (either app bundle or system files)
# Need sudo if:
# 1. Parent directory is not writable (may be owned by another user or root)
# 2. App owner is root
# 3. App owner is different from current user
# Sudo needed if bundle owner/dir is not writable or system files exist.
local needs_sudo=false
local app_owner=$(get_file_owner "$app_path")
local current_user=$(whoami)
@@ -180,11 +156,11 @@ batch_uninstall_applications() {
needs_sudo=true
fi

# Calculate size for summary (including system files)
# Size estimate includes related and system files.
local app_size_kb=$(get_path_size_kb "$app_path")
local related_files=$(find_app_files "$bundle_id" "$app_name")
local related_size_kb=$(calculate_total_size "$related_files")
# system_files is a newline-separated string, not an array
# system_files is a newline-separated string, not an array.
# shellcheck disable=SC2178,SC2128
local system_files=$(find_app_system_files "$bundle_id" "$app_name")
# shellcheck disable=SC2128
@@ -192,7 +168,6 @@ batch_uninstall_applications() {
local total_kb=$((app_size_kb + related_size_kb + system_size_kb))
((total_estimated_size += total_kb))

# Check if system files require sudo
# shellcheck disable=SC2128
if [[ -n "$system_files" ]]; then
needs_sudo=true
@@ -202,33 +177,28 @@ batch_uninstall_applications() {
sudo_apps+=("$app_name")
fi

# Check for sensitive user data (performance optimization: do this once)
# Check for sensitive user data once.
local has_sensitive_data="false"
if [[ -n "$related_files" ]] && echo "$related_files" | grep -qE "$SENSITIVE_DATA_REGEX"; then
has_sensitive_data="true"
fi

# Store details for later use
# Base64 encode file lists to handle multi-line data safely (single line)
# Store details for later use (base64 keeps lists on one line).
local encoded_files
encoded_files=$(printf '%s' "$related_files" | base64 | tr -d '\n')
local encoded_system_files
encoded_system_files=$(printf '%s' "$system_files" | base64 | tr -d '\n')
# Store needs_sudo to avoid recalculating during deletion phase
app_details+=("$app_name|$app_path|$bundle_id|$total_kb|$encoded_files|$encoded_system_files|$has_sensitive_data|$needs_sudo")
done
if [[ -t 1 ]]; then stop_inline_spinner; fi

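# Illustrative note (not part of this change): each app_details record packs
# eight pipe-delimited fields; base64-encoding the two file lists keeps every
# record on one line, so pipe-splitting stays unambiguous. Hypothetical entry:
#   detail="Safari|/Applications/Safari.app|com.apple.Safari|1024|<b64>|<b64>|false|true"
#   IFS='|' read -r name path bundle kb files sys_files sensitive needs_sudo <<< "$detail"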
# Format size display (convert KB to bytes for bytes_to_human())
local size_display=$(bytes_to_human "$((total_estimated_size * 1024))")

# Display detailed file list for each app before confirmation
echo ""
echo -e "${PURPLE_BOLD}Files to be removed:${NC}"
echo ""

# Check for apps with user data that might need backup
# Performance optimization: use pre-calculated flags from app_details
# Warn if user data is detected.
local has_user_data=false
for detail in "${app_details[@]}"; do
IFS='|' read -r _ _ _ _ _ _ has_sensitive_data <<< "$detail"
@@ -252,7 +222,7 @@ batch_uninstall_applications() {
echo -e "${BLUE}${ICON_CONFIRM}${NC} ${app_name} ${GRAY}(${app_size_display})${NC}"
echo -e " ${GREEN}${ICON_SUCCESS}${NC} ${app_path/$HOME/~}"

# Show related files (limit to 5 most important ones for brevity)
# Show related files (limit to 5).
local file_count=0
local max_files=5
while IFS= read -r file; do
@@ -264,7 +234,7 @@ batch_uninstall_applications() {
fi
done <<< "$related_files"

# Show system files
# Show system files (limit to 5).
local sys_file_count=0
while IFS= read -r file; do
if [[ -n "$file" && -e "$file" ]]; then
@@ -275,7 +245,6 @@ batch_uninstall_applications() {
fi
done <<< "$system_files"

# Show count of remaining files if truncated
local total_hidden=$((file_count > max_files ? file_count - max_files : 0))
((total_hidden += sys_file_count > max_files ? sys_file_count - max_files : 0))
if [[ $total_hidden -gt 0 ]]; then
@@ -283,7 +252,7 @@ batch_uninstall_applications() {
fi
done

# Show summary and get batch confirmation first (before asking for password)
# Confirmation before requesting sudo.
local app_total=${#selected_apps[@]}
local app_text="app"
[[ $app_total -gt 1 ]] && app_text="apps"
@@ -315,9 +284,8 @@ batch_uninstall_applications() {
;;
esac

# User confirmed, now request sudo access if needed
# Request sudo if needed.
if [[ ${#sudo_apps[@]} -gt 0 ]]; then
# Check if sudo is already cached
if ! sudo -n true 2> /dev/null; then
if ! request_sudo_access "Admin required for system apps: ${sudo_apps[*]}"; then
echo ""
@@ -325,10 +293,9 @@ batch_uninstall_applications() {
return 1
fi
fi
# Start sudo keepalive with robust parent checking
# Keep sudo alive during uninstall.
parent_pid=$$
(while true; do
# Check if parent process still exists first
if ! kill -0 "$parent_pid" 2> /dev/null; then
exit 0
fi
@@ -340,10 +307,7 @@ batch_uninstall_applications() {

if [[ -t 1 ]]; then start_inline_spinner "Uninstalling apps..."; fi

# Force quit running apps first (batch)
# Note: Apps are already killed in the individual uninstall loop below with app_path for precise matching

# Perform uninstallations (silent mode, show results at end)
# Perform uninstallations (silent mode, show results at end).
if [[ -t 1 ]]; then stop_inline_spinner; fi
local success_count=0 failed_count=0
local -a failed_items=()
@@ -354,23 +318,19 @@ batch_uninstall_applications() {
local system_files=$(decode_file_list "$encoded_system_files" "$app_name")
local reason=""

# Note: needs_sudo is already calculated during scanning phase (performance optimization)

# Stop Launch Agents and Daemons before removal
# Stop Launch Agents/Daemons before removal.
local has_system_files="false"
[[ -n "$system_files" ]] && has_system_files="true"
stop_launch_services "$bundle_id" "$has_system_files"

# Force quit app if still running
if ! force_kill_app "$app_name" "$app_path"; then
reason="still running"
fi

# Remove the application only if not running
# Remove the application only if not running.
if [[ -z "$reason" ]]; then
if [[ "$needs_sudo" == true ]]; then
if ! safe_sudo_remove "$app_path"; then
# Determine specific failure reason (only fetch owner info when needed)
local app_owner=$(get_file_owner "$app_path")
local current_user=$(whoami)
if [[ -n "$app_owner" && "$app_owner" != "$current_user" && "$app_owner" != "root" ]]; then
@@ -384,25 +344,18 @@ batch_uninstall_applications() {
fi
fi

# Remove related files if app removal succeeded
# Remove related files if app removal succeeded.
if [[ -z "$reason" ]]; then
# Remove user-level files
remove_file_list "$related_files" "false" > /dev/null
# Remove system-level files (requires sudo)
remove_file_list "$system_files" "true" > /dev/null

# Clean up macOS defaults (preference domain)
# This removes configuration data stored in the macOS defaults system
# Note: This complements plist file deletion by clearing cached preferences
# Clean up macOS defaults (preference domains).
if [[ -n "$bundle_id" && "$bundle_id" != "unknown" ]]; then
# 1. Standard defaults domain cleanup
if defaults read "$bundle_id" &> /dev/null; then
defaults delete "$bundle_id" 2> /dev/null || true
fi

# 2. Clean up ByHost preferences (machine-specific configs)
# These are often missed by standard cleanup tools
# Format: ~/Library/Preferences/ByHost/com.app.id.XXXX.plist
# ByHost preferences (machine-specific).
if [[ -d ~/Library/Preferences/ByHost ]]; then
find ~/Library/Preferences/ByHost -maxdepth 1 -name "${bundle_id}.*.plist" -delete 2> /dev/null || true
fi
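# Illustrative sketch (not part of this change): the defaults cleanup above,
# standalone (the bundle id is a placeholder):
#   defaults delete "com.example.app" 2> /dev/null || true
#   find ~/Library/Preferences/ByHost -maxdepth 1 \
#       -name "com.example.app.*.plist" -delete 2> /dev/null || true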
@@ -435,7 +388,7 @@ batch_uninstall_applications() {
success_line+=", freed ${GREEN}${freed_display}${NC}"
fi

# Format app list with max 3 per line
# Format app list with max 3 per line.
if [[ -n "$success_list" ]]; then
local idx=0
local is_first_line=true
@@ -445,25 +398,20 @@ batch_uninstall_applications() {
local display_item="${GREEN}${app_name}${NC}"

if ((idx % 3 == 0)); then
# Start new line
if [[ -n "$current_line" ]]; then
summary_details+=("$current_line")
fi
if [[ "$is_first_line" == true ]]; then
# First line: append to success_line
current_line="${success_line}: $display_item"
is_first_line=false
else
# Subsequent lines: just the apps
current_line="$display_item"
fi
else
# Add to current line
current_line="$current_line, $display_item"
fi
((idx++))
done
# Add the last line
if [[ -n "$current_line" ]]; then
summary_details+=("$current_line")
fi
@@ -509,12 +457,11 @@ batch_uninstall_applications() {
print_summary_block "$title" "${summary_details[@]}"
printf '\n'

# Clean up Dock entries for uninstalled apps
# Clean up Dock entries for uninstalled apps.
if [[ $success_count -gt 0 ]]; then
local -a removed_paths=()
for detail in "${app_details[@]}"; do
IFS='|' read -r app_name app_path _ _ _ _ <<< "$detail"
# Check if this app was successfully removed
for success_name in "${success_items[@]}"; do
if [[ "$success_name" == "$app_name" ]]; then
removed_paths+=("$app_path")
@@ -527,14 +474,14 @@ batch_uninstall_applications() {
fi
fi

# Clean up sudo keepalive if it was started
# Clean up sudo keepalive if it was started.
if [[ -n "${sudo_keepalive_pid:-}" ]]; then
kill "$sudo_keepalive_pid" 2> /dev/null || true
wait "$sudo_keepalive_pid" 2> /dev/null || true
sudo_keepalive_pid=""
fi

# Invalidate cache if any apps were successfully uninstalled
# Invalidate cache if any apps were successfully uninstalled.
if [[ $success_count -gt 0 ]]; then
local cache_file="$HOME/.cache/mole/app_scan_cache"
rm -f "$cache_file" 2> /dev/null || true

100
mole
@@ -1,83 +1,58 @@
#!/bin/bash
# Mole - Main Entry Point
# A comprehensive macOS maintenance tool
#
# Clean - Remove junk files and optimize system
# Uninstall - Remove applications completely
# Analyze - Interactive disk space explorer
#
# Usage:
# ./mole # Interactive main menu
# ./mole clean # Direct clean mode
# ./mole uninstall # Direct uninstall mode
# ./mole analyze # Disk space explorer
# ./mole --help # Show help
# Mole - Main CLI entrypoint.
# Routes subcommands and interactive menu.
# Handles update/remove flows.

set -euo pipefail

# Get script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Source common functions
source "$SCRIPT_DIR/lib/core/common.sh"

# Set up cleanup trap for temporary files
trap cleanup_temp_files EXIT INT TERM

# Version info
# Version and update helpers
VERSION="1.17.0"
MOLE_TAGLINE="Deep clean and optimize your Mac."

# Check TouchID configuration
is_touchid_configured() {
local pam_sudo_file="/etc/pam.d/sudo"
[[ -f "$pam_sudo_file" ]] && grep -q "pam_tid.so" "$pam_sudo_file" 2> /dev/null
}

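# Illustrative usage (not part of this change), assuming pam_tid.so in
# /etc/pam.d/sudo is what the touchid command manages:
#   if is_touchid_configured; then
#       echo "Touch ID already enabled for sudo"
#   fi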
# Get latest version from remote repository
|
||||
get_latest_version() {
|
||||
curl -fsSL --connect-timeout 2 --max-time 3 -H "Cache-Control: no-cache" \
|
||||
"https://raw.githubusercontent.com/tw93/mole/main/mole" 2> /dev/null |
|
||||
grep '^VERSION=' | head -1 | sed 's/VERSION="\(.*\)"/\1/'
|
||||
}
|
||||
|
||||
# Get latest version from GitHub API (works for both Homebrew and manual installations)
|
||||
get_latest_version_from_github() {
|
||||
local version
|
||||
version=$(curl -fsSL --connect-timeout 2 --max-time 3 \
|
||||
"https://api.github.com/repos/tw93/mole/releases/latest" 2> /dev/null |
|
||||
grep '"tag_name"' | head -1 | sed -E 's/.*"([^"]+)".*/\1/')
|
||||
# Remove 'v' or 'V' prefix if present
|
||||
version="${version#v}"
|
||||
version="${version#V}"
|
||||
echo "$version"
|
||||
}
|
||||
|
||||
# Check if installed via Homebrew
# Install detection (Homebrew vs manual).
is_homebrew_install() {
    # Fast path: check if mole binary is a Homebrew symlink
    local mole_path
    mole_path=$(command -v mole 2> /dev/null) || return 1

    # Check if mole is a symlink pointing to Homebrew Cellar
    if [[ -L "$mole_path" ]] && readlink "$mole_path" | grep -q "Cellar/mole"; then
        # Symlink looks good, but verify brew actually manages it
        if command -v brew > /dev/null 2>&1; then
            # Use fast brew list check
            brew list --formula 2> /dev/null | grep -q "^mole$" && return 0
        else
            # brew not available - cannot update/remove via Homebrew
            return 1
        fi
    fi

    # Fallback: check common Homebrew paths and verify with Cellar
    if [[ -f "$mole_path" ]]; then
        case "$mole_path" in
            /opt/homebrew/bin/mole | /usr/local/bin/mole)
                # Verify Cellar directory exists
                if [[ -d /opt/homebrew/Cellar/mole ]] || [[ -d /usr/local/Cellar/mole ]]; then
                    # Double-check with brew if available
                    if command -v brew > /dev/null 2>&1; then
                        brew list --formula 2> /dev/null | grep -q "^mole$" && return 0
                    else
@@ -88,7 +63,6 @@ is_homebrew_install() {
        esac
    fi

    # Last resort: check custom Homebrew prefix
    if command -v brew > /dev/null 2>&1; then
        local brew_prefix
        brew_prefix=$(brew --prefix 2> /dev/null)
@@ -100,22 +74,17 @@ is_homebrew_install() {
    return 1
}

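# Callers only consume the exit status, so the typical branch looks like this
# (a sketch; the Homebrew path below actually goes through update_via_homebrew,
# and run_bundled_installer is a hypothetical name for the manual path):
#   if is_homebrew_install; then
#       brew upgrade mole
#   else
#       run_bundled_installer
#   fi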
# Check for updates (non-blocking, always check in background)
# Background update notice
check_for_updates() {
    local msg_cache="$HOME/.cache/mole/update_message"
    ensure_user_dir "$(dirname "$msg_cache")"
    ensure_user_file "$msg_cache"

    # Background version check
    # Always check in background, display result from previous check
    (
        local latest

        # Use GitHub API for version check (works for both Homebrew and manual installs)
        # Try API first (faster and more reliable)
        latest=$(get_latest_version_from_github)
        if [[ -z "$latest" ]]; then
            # Fallback to parsing mole script from raw GitHub
            latest=$(get_latest_version)
        fi

@@ -128,7 +97,6 @@ check_for_updates() {
    disown 2> /dev/null || true
}

# Show update notification if available
show_update_notification() {
    local msg_cache="$HOME/.cache/mole/update_message"
    if [[ -f "$msg_cache" && -s "$msg_cache" ]]; then
@@ -137,6 +105,7 @@ show_update_notification() {
    fi
}

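# The two functions above form a mailbox: one run writes, the next run reads
# (a sketch of the flow, assuming the background job fills msg_cache):
#   check_for_updates          # run N: background job writes update_message
#   show_update_notification   # run N+1: prints whatever was cached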
# UI helpers
show_brand_banner() {
    cat << EOF
${GREEN} __ __ _ ${NC}
@@ -149,7 +118,6 @@ EOF
}

animate_mole_intro() {
    # Non-interactive: skip animation
    if [[ ! -t 1 ]]; then
        return
    fi
@@ -242,7 +210,6 @@ show_version() {
    local sip_status
    if command -v csrutil > /dev/null; then
        sip_status=$(csrutil status 2> /dev/null | grep -o "enabled\|disabled" || echo "Unknown")
        # Capitalize first letter
        sip_status="$(tr '[:lower:]' '[:upper:]' <<< "${sip_status:0:1}")${sip_status:1}"
    else
        sip_status="Unknown"
@@ -295,22 +262,18 @@ show_help() {
    echo
}

# Simple update function
# Update flow (Homebrew or installer).
update_mole() {
    # Set up cleanup trap for update process
    local update_interrupted=false
    trap 'update_interrupted=true; echo ""; exit 130' INT TERM

    # Check if installed via Homebrew
    if is_homebrew_install; then
        update_via_homebrew "$VERSION"
        exit 0
    fi

    # Check for updates
    local latest
    latest=$(get_latest_version_from_github)
    # Fallback to raw GitHub if API fails
    [[ -z "$latest" ]] && latest=$(get_latest_version)

    if [[ -z "$latest" ]]; then
@@ -327,7 +290,6 @@ update_mole() {
        exit 0
    fi

    # Download and run installer with progress
    if [[ -t 1 ]]; then
        start_inline_spinner "Downloading latest version..."
    else
@@ -341,7 +303,6 @@ update_mole() {
        exit 1
    }

    # Download installer with progress and better error handling
    local download_error=""
    if command -v curl > /dev/null 2>&1; then
        download_error=$(curl -fsSL --connect-timeout 10 --max-time 60 "$installer_url" -o "$tmp_installer" 2>&1) || {
@@ -350,7 +311,6 @@ update_mole() {
            rm -f "$tmp_installer"
            log_error "Update failed (curl error: $curl_exit)"

            # Provide helpful error messages based on curl exit codes
            case $curl_exit in
                6) echo -e "${YELLOW}Tip:${NC} Could not resolve host. Check DNS or network connection." ;;
                7) echo -e "${YELLOW}Tip:${NC} Failed to connect. Check network or proxy settings." ;;
@@ -381,7 +341,6 @@ update_mole() {
    if [[ -t 1 ]]; then stop_inline_spinner; fi
    chmod +x "$tmp_installer"

    # Determine install directory
    local mole_path
    mole_path="$(command -v mole 2> /dev/null || echo "$0")"
    local install_dir
@@ -408,7 +367,6 @@ update_mole() {
        echo "Installing update..."
    fi

    # Helper function to process installer output
    process_install_output() {
        local output="$1"
        if [[ -t 1 ]]; then stop_inline_spinner; fi
@@ -419,7 +377,6 @@ update_mole() {
            printf '\n%s\n' "$filtered_output"
        fi

        # Only show success message if installer didn't already do so
        if ! printf '%s\n' "$output" | grep -Eq "Updated to latest version|Already on latest version"; then
            local new_version
            new_version=$("$mole_path" --version 2> /dev/null | awk 'NF {print $NF}' || echo "")
@@ -429,7 +386,6 @@ update_mole() {
        fi
    }

    # Run installer with visible output (but capture for error handling)
    local install_output
    local update_tag="V${latest#V}"
    local config_dir="${MOLE_CONFIG_DIR:-$SCRIPT_DIR}"
@@ -439,7 +395,6 @@ update_mole() {
    if install_output=$(MOLE_VERSION="$update_tag" "$tmp_installer" --prefix "$install_dir" --config "$config_dir" --update 2>&1); then
        process_install_output "$install_output"
    else
        # Retry without --update flag
        if install_output=$(MOLE_VERSION="$update_tag" "$tmp_installer" --prefix "$install_dir" --config "$config_dir" 2>&1); then
            process_install_output "$install_output"
        else
@@ -455,9 +410,8 @@ update_mole() {
    rm -f "$HOME/.cache/mole/update_message"
}

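# A sketch of the effective installer call assembled above (paths are
# illustrative; install_dir and config_dir are resolved at runtime):
#   MOLE_VERSION="V1.17.0" "$tmp_installer" \
#       --prefix /usr/local/bin --config "$HOME/.config/mole" --update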
# Remove Mole from system
# Remove flow (Homebrew + manual + config/cache).
remove_mole() {
    # Detect all installations with loading
    if [[ -t 1 ]]; then
        start_inline_spinner "Detecting Mole installations..."
    else
@@ -484,22 +438,18 @@ remove_mole() {
        fi
    fi

    # Check Homebrew
    if [[ "$brew_has_mole" == "true" ]] || is_homebrew_install; then
        is_homebrew=true
    fi

    # Find mole installations using which/command
    local found_mole
    found_mole=$(command -v mole 2> /dev/null || true)
    if [[ -n "$found_mole" && -f "$found_mole" ]]; then
        # Check if it's not a Homebrew symlink
        if [[ ! -L "$found_mole" ]] || ! readlink "$found_mole" | grep -q "Cellar/mole"; then
            manual_installs+=("$found_mole")
        fi
    fi

    # Also check common locations as fallback
    local -a fallback_paths=(
        "/usr/local/bin/mole"
        "$HOME/.local/bin/mole"
@@ -508,21 +458,18 @@ remove_mole() {

    for path in "${fallback_paths[@]}"; do
        if [[ -f "$path" && "$path" != "$found_mole" ]]; then
            # Check if it's not a Homebrew symlink
            if [[ ! -L "$path" ]] || ! readlink "$path" | grep -q "Cellar/mole"; then
                manual_installs+=("$path")
            fi
        fi
    done

    # Find mo alias
    local found_mo
    found_mo=$(command -v mo 2> /dev/null || true)
    if [[ -n "$found_mo" && -f "$found_mo" ]]; then
        alias_installs+=("$found_mo")
    fi

    # Also check common locations for mo
    local -a alias_fallback=(
        "/usr/local/bin/mo"
        "$HOME/.local/bin/mo"
@@ -541,7 +488,6 @@ remove_mole() {

    printf '\n'

    # Check if anything to remove
    local manual_count=${#manual_installs[@]}
    local alias_count=${#alias_installs[@]}
    if [[ "$is_homebrew" == "false" && ${manual_count:-0} -eq 0 && ${alias_count:-0} -eq 0 ]]; then
@@ -549,7 +495,6 @@ remove_mole() {
        exit 0
    fi

    # List items for removal
    echo -e "${YELLOW}Remove Mole${NC} - will delete the following:"
    if [[ "$is_homebrew" == "true" ]]; then
        echo " - Mole via Homebrew"
@@ -561,7 +506,6 @@ remove_mole() {
    echo " - ~/.cache/mole"
    echo -ne "${PURPLE}${ICON_ARROW}${NC} Press ${GREEN}Enter${NC} to confirm, ${GRAY}ESC${NC} to cancel: "

    # Read single key
    IFS= read -r -s -n1 key || key=""
    drain_pending_input # Clean up any escape sequence remnants
    case "$key" in
@@ -570,14 +514,12 @@ remove_mole() {
            ;;
        "" | $'\n' | $'\r')
            printf "\r\033[K" # Clear the prompt line
            # Continue with removal
            ;;
        *)
            exit 0
            ;;
    esac

    # Remove Homebrew installation
    local has_error=false
    if [[ "$is_homebrew" == "true" ]]; then
        if [[ -z "$brew_cmd" ]]; then
@@ -598,18 +540,14 @@ remove_mole() {
            log_success "Mole uninstalled via Homebrew."
        fi
    fi
    # Remove manual installations
    if [[ ${manual_count:-0} -gt 0 ]]; then
        for install in "${manual_installs[@]}"; do
            if [[ -f "$install" ]]; then
                # Check if directory requires sudo (deletion is a directory operation)
                if [[ ! -w "$(dirname "$install")" ]]; then
                    # Requires sudo
                    if ! sudo rm -f "$install" 2> /dev/null; then
                        has_error=true
                    fi
                else
                    # Regular user permission
                    if ! rm -f "$install" 2> /dev/null; then
                        has_error=true
                    fi
@@ -620,7 +558,6 @@ remove_mole() {
    if [[ ${alias_count:-0} -gt 0 ]]; then
        for alias in "${alias_installs[@]}"; do
            if [[ -f "$alias" ]]; then
                # Check if directory requires sudo
                if [[ ! -w "$(dirname "$alias")" ]]; then
                    if ! sudo rm -f "$alias" 2> /dev/null; then
                        has_error=true
@@ -633,16 +570,13 @@ remove_mole() {
            fi
        done
    fi
    # Clean up cache first (silent)
    if [[ -d "$HOME/.cache/mole" ]]; then
        rm -rf "$HOME/.cache/mole" 2> /dev/null || true
    fi
    # Clean up configuration last (silent)
    if [[ -d "$HOME/.config/mole" ]]; then
        rm -rf "$HOME/.config/mole" 2> /dev/null || true
    fi

    # Show final result
    local final_message
    if [[ "$has_error" == "true" ]]; then
        final_message="${YELLOW}${ICON_ERROR} Mole uninstalled with some errors, thank you for using Mole!${NC}"
@@ -654,38 +588,33 @@ remove_mole() {
    exit 0
}

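# The Enter/ESC prompt above is a reusable one-keystroke confirm pattern
# (sketch; drain_pending_input is Mole's own helper for stray escape bytes):
#   IFS= read -r -s -n1 key || key=""
#   case "$key" in
#       "" | $'\n' | $'\r') : ;;   # Enter -> proceed
#       *) exit 0 ;;               # ESC or anything else -> cancel
#   esac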
# Display main menu options with minimal refresh to avoid flicker
# Menu UI
show_main_menu() {
    local selected="${1:-1}"
    local _full_draw="${2:-true}" # Kept for compatibility (unused)
    local banner="${MAIN_MENU_BANNER:-}"
    local update_message="${MAIN_MENU_UPDATE_MESSAGE:-}"

    # Fallback if globals missing (should not happen)
    if [[ -z "$banner" ]]; then
        banner="$(show_brand_banner)"
        MAIN_MENU_BANNER="$banner"
    fi

    printf '\033[H' # Move cursor to home
    printf '\033[H'

    local line=""
    # Leading spacer
    printf '\r\033[2K\n'

    # Brand banner
    while IFS= read -r line || [[ -n "$line" ]]; do
        printf '\r\033[2K%s\n' "$line"
    done <<< "$banner"

    # Update notification block (if present)
    if [[ -n "$update_message" ]]; then
        while IFS= read -r line || [[ -n "$line" ]]; do
            printf '\r\033[2K%s\n' "$line"
        done <<< "$update_message"
    fi

    # Spacer before menu options
    printf '\r\033[2K\n'

    printf '\r\033[2K%s\n' "$(show_menu_option 1 "Clean Free up disk space" "$([[ $selected -eq 1 ]] && echo true || echo false)")"
@@ -696,7 +625,6 @@ show_main_menu() {

    if [[ -t 0 ]]; then
        printf '\r\033[2K\n'
        # Show TouchID if not configured, otherwise show Update
        local controls="${GRAY}↑↓ | Enter | M More | "
        if ! is_touchid_configured; then
            controls="${controls}T TouchID"
@@ -708,13 +636,10 @@ show_main_menu() {
        printf '\r\033[2K\n'
    fi

    # Clear any remaining content below without full screen wipe
    printf '\033[J'
}

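# The flicker-free redraw rests on three ANSI sequences (for reference):
#   printf '\033[H'    # cursor to top-left without erasing
#   printf '\r\033[2K' # erase the current line in place before rewriting it
#   printf '\033[J'    # erase from cursor to end of screen (stale tail only)
# Rewriting lines in place avoids the blank-frame flash of a full clear.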
# Interactive main menu loop
interactive_main_menu() {
    # Show intro animation only once per terminal tab
    if [[ -t 1 ]]; then
        local tty_name
        tty_name=$(tty 2> /dev/null || echo "")
@@ -820,13 +745,12 @@ interactive_main_menu() {
            "QUIT") cleanup_and_exit ;;
        esac

        # Drain any accumulated input after processing (e.g., touchpad scroll events)
        drain_pending_input
    done
}

# CLI dispatch
main() {
    # Parse global flags
    local -a args=()
    for arg in "$@"; do
        case "$arg" in

@@ -1,51 +0,0 @@
#!/bin/bash
# Build Universal Binary for analyze-go
# Supports both Apple Silicon and Intel Macs

set -euo pipefail

cd "$(dirname "$0")/.."

# Check if Go is installed
if ! command -v go > /dev/null 2>&1; then
    echo "Error: Go not installed"
    echo "Install: brew install go"
    exit 1
fi

echo "Building analyze-go for multiple architectures..."

# Get version info
VERSION=$(git describe --tags --always --dirty 2> /dev/null || echo "dev")
BUILD_TIME=$(date -u '+%Y-%m-%d_%H:%M:%S')
LDFLAGS="-s -w -X main.Version=$VERSION -X main.BuildTime=$BUILD_TIME"

echo " Version: $VERSION"
echo " Build time: $BUILD_TIME"
echo ""

# Build for arm64 (Apple Silicon)
echo " → Building for arm64..."
GOARCH=arm64 go build -ldflags="$LDFLAGS" -trimpath -o bin/analyze-go-arm64 ./cmd/analyze

# Build for amd64 (Intel)
echo " → Building for amd64..."
GOARCH=amd64 go build -ldflags="$LDFLAGS" -trimpath -o bin/analyze-go-amd64 ./cmd/analyze

# Create Universal Binary
echo " → Creating Universal Binary..."
lipo -create bin/analyze-go-arm64 bin/analyze-go-amd64 -output bin/analyze-go

# Clean up temporary files
rm bin/analyze-go-arm64 bin/analyze-go-amd64

# Verify
echo ""
echo "✓ Build complete!"
echo ""
file bin/analyze-go
size_bytes=$(/usr/bin/stat -f%z bin/analyze-go 2> /dev/null || echo 0)
size_mb=$((size_bytes / 1024 / 1024))
printf "Size: %d MB (%d bytes)\n" "$size_mb" "$size_bytes"
echo ""
echo "Binary supports: arm64 (Apple Silicon) + x86_64 (Intel)"
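# A quick check that the result really is a fat binary with both slices:
#   lipo -archs bin/analyze-go   # -> x86_64 arm64
#   file bin/analyze-go          # -> Mach-O universal binary with 2 architectures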
@@ -1,44 +0,0 @@
#!/bin/bash
# Build Universal Binary for status-go
# Supports both Apple Silicon and Intel Macs

set -euo pipefail

cd "$(dirname "$0")/.."

if ! command -v go > /dev/null 2>&1; then
    echo "Error: Go not installed"
    echo "Install: brew install go"
    exit 1
fi

echo "Building status-go for multiple architectures..."

VERSION=$(git describe --tags --always --dirty 2> /dev/null || echo "dev")
BUILD_TIME=$(date -u '+%Y-%m-%d_%H:%M:%S')
LDFLAGS="-s -w -X main.Version=$VERSION -X main.BuildTime=$BUILD_TIME"

echo " Version: $VERSION"
echo " Build time: $BUILD_TIME"
echo ""

echo " → Building for arm64..."
GOARCH=arm64 go build -ldflags="$LDFLAGS" -trimpath -o bin/status-go-arm64 ./cmd/status

echo " → Building for amd64..."
GOARCH=amd64 go build -ldflags="$LDFLAGS" -trimpath -o bin/status-go-amd64 ./cmd/status

echo " → Creating Universal Binary..."
lipo -create bin/status-go-arm64 bin/status-go-amd64 -output bin/status-go

rm bin/status-go-arm64 bin/status-go-amd64

echo ""
echo "✓ Build complete!"
echo ""
file bin/status-go
size_bytes=$(/usr/bin/stat -f%z bin/status-go 2> /dev/null || echo 0)
size_mb=$((size_bytes / 1024 / 1024))
printf "Size: %d MB (%d bytes)\n" "$size_mb" "$size_bytes"
echo ""
echo "Binary supports: arm64 (Apple Silicon) + x86_64 (Intel)"
@@ -1,71 +0,0 @@
#!/bin/bash
# Pre-commit hook to ensure bin/analyze-go and bin/status-go are universal binaries

set -e

# ANSI color codes
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Check if binaries are being added or modified (ignore deletions)
binaries=()
while read -r status path; do
    case "$status" in
        A|M)
            if [[ "$path" == "bin/analyze-go" || "$path" == "bin/status-go" ]]; then
                binaries+=("$path")
            fi
            ;;
    esac
done < <(git diff --cached --name-status)

# If no binaries are being committed, exit early
if [[ ${#binaries[@]} -eq 0 ]]; then
    exit 0
fi

echo -e "${YELLOW}Checking compiled binaries...${NC}"

# Verify each binary is a universal binary
all_valid=true
for binary in "${binaries[@]}"; do
    if [[ ! -f "$binary" ]]; then
        echo -e "${RED}✗ $binary not found${NC}"
        all_valid=false
        continue
    fi

    # Check if it's a universal binary
    if file "$binary" | grep -q "Mach-O universal binary"; then
        # Verify it contains both x86_64 and arm64
        if lipo -info "$binary" 2>/dev/null | grep -q "x86_64 arm64"; then
            echo -e "${GREEN}✓ $binary is a universal binary (x86_64 + arm64)${NC}"
        elif lipo -info "$binary" 2>/dev/null | grep -q "arm64 x86_64"; then
            echo -e "${GREEN}✓ $binary is a universal binary (x86_64 + arm64)${NC}"
        else
            echo -e "${RED}✗ $binary is missing required architectures${NC}"
            lipo -info "$binary"
            all_valid=false
        fi
    else
        echo -e "${RED}✗ $binary is not a universal binary${NC}"
        file "$binary"
        all_valid=false
    fi
done

if [[ "$all_valid" == "false" ]]; then
    echo ""
    echo -e "${RED}Commit rejected: binaries must be universal (x86_64 + arm64)${NC}"
    echo ""
    echo "To create universal binaries, run:"
    echo " ./scripts/build-analyze.sh"
    echo " ./scripts/build-status.sh"
    echo ""
    exit 1
fi

echo -e "${GREEN}All binaries verified!${NC}"
exit 0
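# The staged-file scan above reads tab-separated status/path pairs, e.g.
# illustrative output of `git diff --cached --name-status`:
#   M   bin/analyze-go            # modified -> checked
#   A   bin/status-go             # added -> checked
#   D   scripts/build-analyze.sh  # deleted -> ignored by the A|M case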
@@ -1,28 +0,0 @@
#!/bin/bash
# Install git hooks for Mole development

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
HOOKS_DIR="$REPO_ROOT/.git/hooks"

if [[ ! -d "$REPO_ROOT/.git" ]]; then
    echo "Error: Not in a git repository"
    exit 1
fi

echo "Installing git hooks..."

# Install pre-commit hook
if [[ -f "$SCRIPT_DIR/hooks/pre-commit" ]]; then
    cp "$SCRIPT_DIR/hooks/pre-commit" "$HOOKS_DIR/pre-commit"
    chmod +x "$HOOKS_DIR/pre-commit"
    echo "✓ Installed pre-commit hook (validates universal binaries)"
fi

echo ""
echo "Git hooks installed successfully!"
echo ""
echo "The pre-commit hook will ensure that bin/analyze-go and bin/status-go"
echo "are universal binaries (x86_64 + arm64) before allowing commits."