mirror of https://github.com/tw93/Mole.git, synced 2026-02-16 17:35:16 +00:00

feat: Enhance clean, optimize, analyze, and status commands, and update security audit documentation.
.github/copilot-instructions.md (vendored) | 1

@@ -1 +0,0 @@
-../AGENT.md

AGENT.md | 154
@@ -1,130 +1,46 @@
-# Mole AI Agent Documentation
+# Mole AI Agent Notes

-> **READ THIS FIRST**: This file serves as the single source of truth for any AI agent trying to work on the Mole repository. It aggregates architectural context, development workflows, and behavioral guidelines.
+Use this file as the single source of truth for how to work on Mole.

-## 1. Philosophy & Guidelines
+## Principles

-### Core Philosophy
+- Safety first: never risk user data or system stability.
+- Never run destructive operations that could break the user's machine.
+- Do not delete user-important files; cleanup must be conservative and reversible.
+- Always use `safe_*` helpers (no raw `rm -rf`).
+- Keep changes small and confirm uncertain behavior.
+- Follow the local code style in the file you are editing (Bash 3.2 compatible).
+- Comments must be English, concise, and intent-focused.
+- Use comments for safety boundaries, non-obvious logic, or flow context.
+- Entry scripts start with ~3 short lines describing purpose/behavior.
+- Do not remove installer flags `--prefix`/`--config` (update flow depends on them).
+- Do not commit or submit code changes unless explicitly requested.

-- **Safety First**: Never risk user data. Always use `safe_*` wrappers. When in doubt, ask.
-- **Incremental Progress**: Break complex tasks into manageable stages.
-- **Clear Intent**: Prioritize readability and maintainability over clever hacks.
-- **Native Performance**: Use Go for heavy lifting (scanning), Bash for system glue.
+## Architecture

-### Eight Honors and Eight Shames
+- `mole`: main CLI entrypoint (menu + command routing).
+- `mo`: CLI alias wrapper.
+- `install.sh`: manual installer/updater (download/build + install).
+- `bin/`: command entry points (`clean.sh`, `uninstall.sh`, `optimize.sh`, `purge.sh`, `touchid.sh`, `analyze.sh`, `status.sh`).
+- `lib/`: shell logic (`core/`, `clean/`, `ui/`).
+- `cmd/`: Go apps (`analyze/`, `status/`).
+- `scripts/`: build/test helpers.
+- `tests/`: BATS integration tests.

-- **Shame** in guessing APIs, **Honor** in careful research.
-- **Shame** in vague execution, **Honor** in seeking confirmation.
-- **Shame** in assuming business logic, **Honor** in human verification.
-- **Shame** in creating interfaces, **Honor** in reusing existing ones.
-- **Shame** in skipping validation, **Honor** in proactive testing.
-- **Shame** in breaking architecture, **Honor** in following specifications.
-- **Shame** in pretending to understand, **Honor** in honest ignorance.
-- **Shame** in blind modification, **Honor** in careful refactoring.
+## Workflow

-### Quality Standards
+- Shell work: add logic under `lib/`, call from `bin/`.
+- Go work: edit `cmd/<app>/*.go`.
+- Prefer dry-run modes while validating cleanup behavior.

-- **English Only**: Comments and code must be in English.
-- **No Unnecessary Comments**: Code should be self-explanatory.
-- **Pure Shell Style**: Use `[[ ]]` over `[ ]`, avoid `local var` assignments on definition line if exit code matters.
-- **Go Formatting**: Always run `gofmt` (or let the build script do it).
+## Build & Test

-## 2. Project Identity
+- `./scripts/run-tests.sh` runs lint/shell/go tests.
+- `make build` builds Go binaries for local development.
+- `go run ./cmd/analyze` for dev runs without building.

-- **Name**: Mole
-- **Purpose**: A lightweight, robust macOS cleanup and system analysis tool.
-- **Core Value**: Native, fast, safe, and dependency-free (pure Bash + static Go binary).
-- **Mechanism**:
-  - **Cleaning**: Pure Bash scripts for transparency and safety.
-  - **Analysis**: High-concurrency Go TUI (Bubble Tea) for disk scanning.
-  - **Monitoring**: Real-time Go TUI for system status.
+## Key Behaviors

-## 3. Technology Stack
+- `mole update` uses `install.sh` with `--prefix`/`--config`; keep these flags.
+- Cleanup must go through `safe_*` and respect protection lists.

-- **Shell**: Bash 3.2+ (macOS default compatible).
-- **Go**: Latest Stable (Bubble Tea framework).
-- **Testing**:
-  - **Shell**: `bats-core`, `shellcheck`.
-  - **Go**: Native `testing` package.

-## 4. Repository Architecture

-### Directory Structure

-- **`bin/`**: Standalone entry points.
-  - `mole`: Main CLI wrapper.
-  - `clean.sh`, `uninstall.sh`: Logic wrappers calling `lib/`.
-- **`cmd/`**: Go applications.
-  - `analyze/`: Disk space analyzer (concurrent, TUI).
-  - `status/`: System monitor (TUI).
-- **`lib/`**: Core Shell Logic.
-  - `core/`: Low-level utilities (logging, `safe_remove`, sudo helpers).
-  - `clean/`: Domain-specific cleanup tasks (`brew`, `caches`, `system`).
-  - `ui/`: Reusable TUI components (`menu_paginated.sh`).
-- **`scripts/`**: Development tools (`run-tests.sh`, `build-analyze.sh`).
-- **`tests/`**: BATS integration tests.

-## 5. Key Workflows

-### Development

-1. **Understand**: Read `lib/core/` to know what tools are available.
-2. **Implement**:
-   - For Shell: Add functions to `lib/`, source them in `bin/`.
-   - For Go: Edit `cmd/app/*.go`.
-3. **Verify**: Use dry-run modes first.

-**Commands**:

-- `./scripts/run-tests.sh`: **Run EVERYTHING** (Lint, Syntax, Unit, Go).
-- `./bin/clean.sh --dry-run`: Test cleanup logic safely.
-- `go run ./cmd/analyze`: Run analyzer in dev mode.

-### Building

-- `./scripts/build-analyze.sh`: Compiles `analyze-go` binary (Universal).
-- `./scripts/build-status.sh`: Compiles `status-go` binary.

-### Release

-- Versions managed via git tags.
-- Build scripts embed version info into binaries.

-## 6. Implementation Details

-### Safety System (`lib/core/file_ops.sh`)

-- **Crucial**: Never use `rm -rf` directly.
-- **Use**:
-  - `safe_remove "/path"`
-  - `safe_find_delete "/path" "*.log" 7 "f"`
-- **Protection**:
-  - `validate_path_for_deletion` prevents root/system deletion.
-  - `checks` ensure path is absolute and safe.

-### Go Concurrency (`cmd/analyze`)

-- **Worker Pool**: Tuned dynamically (16-64 workers) to respect system load.
-- **Throttling**: UI updates throttled (every 100 items) to keep TUI responsive (80ms tick).
-- **Memory**: Uses Heaps for top-file tracking to minimize RAM usage.

-### TUI Unification

-- **Keybindings**: `j/k` (Nav), `space` (Select), `enter` (Action), `R` (Refresh).
-- **Style**: Compact footers ` | ` and standard colors defined in `lib/core/base.sh` or Go constants.

-## 7. Common AI Tasks

-- **Adding a Cleanup Task**:
-  1. Create/Edit `lib/clean/topic.sh`.
-  2. Define `clean_topic()`.
-  3. Register in `lib/optimize/tasks.sh` or `bin/clean.sh`.
-  4. **MUST** use `safe_*` functions.
-- **Modifying Go UI**:
-  1. Update `model` struct in `main.go`.
-  2. Update `View()` in `view.go`.
-  3. Run `./scripts/build-analyze.sh` to test.
-- **Fixing a Bug**:
-  1. Reproduce with a new BATS test in `tests/`.
-  2. Fix logic.
-  3. Verify with `./scripts/run-tests.sh`.
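The `safe_*` rule and the "Adding a Cleanup Task" steps removed above imply one concrete module shape. A minimal sketch under those conventions — the module, its paths, and the registration point are hypothetical, while `safe_remove`/`safe_find_delete` are the helpers documented in the deleted Safety System section:

```bash
#!/bin/bash
# lib/clean/example.sh - hypothetical cleanup module (sketch only).
# Sourced and registered from bin/clean.sh; never calls rm -rf directly.

clean_example() {
    # Remove one well-known cache directory via the guarded helper.
    safe_remove "$HOME/Library/Caches/com.example.app"
    # Delete *.log files (type "f") older than 7 days.
    safe_find_delete "$HOME/Library/Logs/Example" "*.log" 7 "f"
}
```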
@@ -5,9 +5,6 @@
 ```bash
 # Install development tools
 brew install shfmt shellcheck bats-core
-
-# Install git hooks (validates universal binaries)
-./scripts/setup-hooks.sh
 ```

 ## Development
@@ -31,7 +28,7 @@ Individual commands:
 ./scripts/format.sh

 # Run tests only
-./tests/run.sh
+./scripts/run-tests.sh
 ```

 ## Code Style
@@ -158,23 +155,20 @@ Format: `[MODULE_NAME] message` output to stderr.
 - Run `go vet ./cmd/...` to check for issues
 - Build with `go build ./...` to verify all packages compile

-**Building Universal Binaries:**
+**Building Go Binaries:**

-⚠️ **IMPORTANT**: Never use `go build` directly to create `bin/analyze-go` or `bin/status-go`!
-
-Mole must support both Intel and Apple Silicon Macs. Always use the build scripts:
+For local development:

 ```bash
-# Build universal binaries (x86_64 + arm64)
-./scripts/build-analyze.sh
-./scripts/build-status.sh
+# Build binaries for current architecture
+make build
+
+# Or run directly without building
+go run ./cmd/analyze
+go run ./cmd/status
 ```

-For local development/testing, you can use:
-
-- `go run ./cmd/status` or `go run ./cmd/analyze` (quick iteration)
-- `go build ./cmd/status` (creates single-arch binary for testing)
-
-The pre-commit hook will prevent you from accidentally committing non-universal binaries.
+For releases, GitHub Actions builds architecture-specific binaries automatically.

 **Guidelines:**
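For context on the universal binaries the old text mandated: macOS universal builds are conventionally produced by compiling each architecture separately and merging with `lipo`. A hedged sketch of that approach (output paths illustrative; the repo's actual build scripts may differ):

```bash
#!/bin/bash
# Sketch: merge two single-arch Go builds into one universal binary.
set -euo pipefail
GOOS=darwin GOARCH=amd64 go build -o /tmp/analyze-amd64 ./cmd/analyze
GOOS=darwin GOARCH=arm64 go build -o /tmp/analyze-arm64 ./cmd/analyze
lipo -create -output bin/analyze-go /tmp/analyze-amd64 /tmp/analyze-arm64
lipo -info bin/analyze-go # should list both x86_64 and arm64
```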
@@ -4,7 +4,7 @@

 **Security Audit & Compliance Report**

-Version 1.15.9 | December 29, 2025
+Version 1.17.0 | December 31, 2025

 ---

@@ -31,9 +31,9 @@ Version 1.15.9 | December 29, 2025

 | Attribute | Details |
 |-----------|---------|
-| Audit Date | December 29, 2025 |
+| Audit Date | December 31, 2025 |
 | Audit Conclusion | **PASSED** |
-| Mole Version | V1.15.9 |
+| Mole Version | V1.17.0 |
 | Audited Branch | `main` (HEAD) |
 | Scope | Shell scripts, Go binaries, Configuration |
 | Methodology | Static analysis, Threat modeling, Code review |
@@ -1,5 +1,7 @@
 #!/bin/bash
-# Entry point for the Go-based disk analyzer binary bundled with Mole.
+# Mole - Analyze command.
+# Runs the Go disk analyzer UI.
+# Uses bundled analyze-go binary.

 set -euo pipefail

bin/clean.sh | 136
@@ -1,6 +1,7 @@
 #!/bin/bash
-# Mole - Deeper system cleanup
-# Complete cleanup with smart password handling
+# Mole - Clean command.
+# Runs cleanup modules with optional sudo.
+# Supports dry-run and whitelist.

 set -euo pipefail

@@ -88,8 +89,7 @@ else
 WHITELIST_PATTERNS=("${DEFAULT_WHITELIST_PATTERNS[@]}")
 fi

-# Pre-expand tildes in whitelist patterns once to avoid repetitive expansion in loops
-# This significantly improves performance when checking thousands of files
+# Expand whitelist patterns once to avoid repeated tilde expansion in hot loops.
 expand_whitelist_patterns() {
 if [[ ${#WHITELIST_PATTERNS[@]} -gt 0 ]]; then
 local -a EXPANDED_PATTERNS
@@ -112,7 +112,7 @@ if [[ ${#WHITELIST_PATTERNS[@]} -gt 0 ]]; then
 done
 fi

-# Global tracking variables (initialized in perform_cleanup)
+# Section tracking and summary counters.
 total_items=0
 TRACK_SECTION=0
 SECTION_ACTIVITY=0
@@ -127,31 +127,25 @@ note_activity() {
 fi
 }

-# Cleanup background processes
 CLEANUP_DONE=false
 # shellcheck disable=SC2329
 cleanup() {
 local signal="${1:-EXIT}"
 local exit_code="${2:-$?}"

-# Prevent multiple executions
 if [[ "$CLEANUP_DONE" == "true" ]]; then
 return 0
 fi
 CLEANUP_DONE=true

-# Stop any inline spinner
 stop_inline_spinner 2> /dev/null || true

-# Clear any spinner output - spinner outputs to stderr
 if [[ -t 1 ]]; then
 printf "\r\033[K" >&2 || true
 fi

-# Clean up temporary files
 cleanup_temp_files

-# Stop sudo session
 stop_sudo_session

 show_cursor
@@ -172,7 +166,6 @@ start_section() {
 MOLE_SPINNER_PREFIX=" " start_inline_spinner "Preparing..."
 fi

-# Write section header to export list in dry-run mode
 if [[ "$DRY_RUN" == "true" ]]; then
 ensure_user_file "$EXPORT_LIST_FILE"
 echo "" >> "$EXPORT_LIST_FILE"
@@ -240,11 +233,9 @@ normalize_paths_for_cleanup() {
 get_cleanup_path_size_kb() {
 local path="$1"

-# Optimization: Use stat for regular files (much faster than du)
 if [[ -f "$path" && ! -L "$path" ]]; then
 if command -v stat > /dev/null 2>&1; then
 local bytes
-# macOS/BSD stat
 bytes=$(stat -f%z "$path" 2> /dev/null || echo "0")
 if [[ "$bytes" =~ ^[0-9]+$ && "$bytes" -gt 0 ]]; then
 echo $(((bytes + 1023) / 1024))
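The hunk above trims comments from a size fast path: `stat -f%z` for regular files (one syscall) with `du` as the directory fallback. A standalone sketch of the idea, with a hypothetical helper name:

```bash
#!/bin/bash
# Sketch: size in KB via stat(1) for plain files, du(1) for directories.
path_size_kb() {
    local path="$1" bytes
    if [[ -f "$path" && ! -L "$path" ]]; then
        bytes=$(stat -f%z "$path" 2> /dev/null || echo 0) # BSD/macOS stat
        echo $(((bytes + 1023) / 1024))                   # round up to KB
    else
        du -sk "$path" 2> /dev/null | awk '{print $1}'
    fi
}
```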
@@ -286,9 +277,7 @@ safe_clean() {
 description="$1"
 targets=("$1")
 else
-# Get last argument as description
 description="${*: -1}"
-# Get all arguments except last as targets array
 targets=("${@:1:$#-1}")
 fi

@@ -305,12 +294,10 @@ safe_clean() {
 MOLE_SPINNER_PREFIX=" " start_inline_spinner "Scanning ${#targets[@]} items..."
 fi

-# Optimized parallel processing for better performance
 local -a existing_paths=()
 for path in "${targets[@]}"; do
 local skip=false

-# Centralized protection for critical apps and system components
 if should_protect_path "$path"; then
 skip=true
 ((skipped_count++))
@@ -318,7 +305,6 @@ safe_clean() {

 [[ "$skip" == "true" ]] && continue

-# Check user-defined whitelist
 if is_path_whitelisted "$path"; then
 skip=true
 ((skipped_count++))
@@ -333,7 +319,6 @@ safe_clean() {

 debug_log "Cleaning: $description (${#existing_paths[@]} items)"

-# Update global whitelist skip counter
 if [[ $skipped_count -gt 0 ]]; then
 ((whitelist_skipped_count += skipped_count))
 fi
@@ -355,7 +340,6 @@ safe_clean() {
 fi
 fi

-# Only show spinner if we have enough items to justify it (>10 items)
 local show_spinner=false
 if [[ ${#existing_paths[@]} -gt 10 ]]; then
 show_spinner=true
@@ -363,14 +347,11 @@ safe_clean() {
 if [[ -t 1 ]]; then MOLE_SPINNER_PREFIX=" " start_inline_spinner "Scanning items..."; fi
 fi

+# For larger batches, precompute sizes in parallel for better UX/stat accuracy.
 if [[ ${#existing_paths[@]} -gt 3 ]]; then
 local temp_dir
-# create_temp_dir uses mktemp -d for secure temporary directory creation
 temp_dir=$(create_temp_dir)

-# Check if we have many small files - in that case parallel overhead > benefit
-# If most items are files (not dirs), avoidance of subshells is faster
-# Sample up to 20 items or 20% of items (whichever is larger) for better accuracy
 local dir_count=0
 local sample_size=$((${#existing_paths[@]} > 20 ? 20 : ${#existing_paths[@]}))
 local max_sample=$((${#existing_paths[@]} * 20 / 100))
@@ -380,8 +361,7 @@ safe_clean() {
 [[ -d "${existing_paths[i]}" ]] && ((dir_count++))
 done

-# If we have mostly files and few directories, use sequential processing
-# Subshells for 50+ files is very slow compared to direct stat
+# Heuristic: mostly files -> sequential stat is faster than subshells.
 if [[ $dir_count -lt 5 && ${#existing_paths[@]} -gt 20 ]]; then
 if [[ -t 1 && "$show_spinner" == "false" ]]; then
 MOLE_SPINNER_PREFIX=" " start_inline_spinner "Scanning items..."
@@ -395,7 +375,6 @@ safe_clean() {
 size=$(get_cleanup_path_size_kb "$path")
 [[ ! "$size" =~ ^[0-9]+$ ]] && size=0

-# Write result to file to maintain compatibility with the logic below
 if [[ "$size" -gt 0 ]]; then
 echo "$size 1" > "$temp_dir/result_${idx}"
 else
@@ -403,14 +382,12 @@ safe_clean() {
 fi

 ((idx++))
-# Provide UI feedback periodically
 if [[ $((idx % 20)) -eq 0 && "$show_spinner" == "true" && -t 1 ]]; then
 update_progress_if_needed "$idx" "${#existing_paths[@]}" last_progress_update 1 || true
 last_progress_update=$(date +%s)
 fi
 done
 else
-# Parallel processing (bash 3.2 compatible)
 local -a pids=()
 local idx=0
 local completed=0
@@ -422,12 +399,8 @@ safe_clean() {
 (
 local size
 size=$(get_cleanup_path_size_kb "$path")
-# Ensure size is numeric (additional safety layer)
 [[ ! "$size" =~ ^[0-9]+$ ]] && size=0
-# Use index + PID for unique filename
 local tmp_file="$temp_dir/result_${idx}.$$"
-# Optimization: Skip expensive file counting. Size is the key metric.
-# Just indicate presence if size > 0
 if [[ "$size" -gt 0 ]]; then
 echo "$size 1" > "$tmp_file"
 else
@@ -443,7 +416,6 @@ safe_clean() {
 pids=("${pids[@]:1}")
 ((completed++))

-# Update progress using helper function
 if [[ "$show_spinner" == "true" && -t 1 ]]; then
 update_progress_if_needed "$completed" "$total_paths" last_progress_update 2 || true
 fi
@@ -456,7 +428,6 @@ safe_clean() {
 wait "$pid" 2> /dev/null || true
 ((completed++))

-# Update progress using helper function
 if [[ "$show_spinner" == "true" && -t 1 ]]; then
 update_progress_if_needed "$completed" "$total_paths" last_progress_update 2 || true
 fi
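Both progress branches above drain a sliding window of background PIDs — the Bash 3.2-compatible substitute for `wait -n`. A sketch of the pattern, with `do_work` as a placeholder worker:

```bash
#!/bin/bash
# Sketch: bounded background jobs on Bash 3.2 (no wait -n, no job pools).
max_jobs=4
pids=()
for item in "$@"; do
    (do_work "$item") & # hypothetical worker subshell
    pids+=($!)
    if [[ ${#pids[@]} -ge $max_jobs ]]; then
        wait "${pids[0]}" 2> /dev/null || true # block on the oldest job
        pids=("${pids[@]:1}")                  # drop the reaped PID
    fi
done
for pid in "${pids[@]}"; do
    wait "$pid" 2> /dev/null || true
done
```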
@@ -464,24 +435,15 @@ safe_clean() {
 fi
 fi

-# Read results using same index
+# Read results back in original order.
 idx=0
 if [[ ${#existing_paths[@]} -gt 0 ]]; then
 for path in "${existing_paths[@]}"; do
 local result_file="$temp_dir/result_${idx}"
 if [[ -f "$result_file" ]]; then
 read -r size count < "$result_file" 2> /dev/null || true
-# Even if size is 0 or du failed, we should try to remove the file if it was found
-# count > 0 means the file existed at scan time (or we forced it to 1)
-
-# Correction: The subshell now writes "size 1" if size>0, or "0 0" if size=0
-# But we want to delete even if size is 0.
-# Let's check if the path still exists to be safe, or trust the input list.
-# Actually, safe_remove checks existence.

 local removed=0
 if [[ "$DRY_RUN" != "true" ]]; then
-# Handle symbolic links separately (only remove the link, not the target)
 if [[ -L "$path" ]]; then
 rm "$path" 2> /dev/null && removed=1
 else
@@ -500,8 +462,6 @@ safe_clean() {
 ((total_count += 1))
 removed_any=1
 else
-# Only increment failure count if we actually tried and failed
-# Check existence to avoid false failure report for already gone files
 if [[ -e "$path" && "$DRY_RUN" != "true" ]]; then
 ((removal_failed_count++))
 fi
@@ -511,22 +471,16 @@ safe_clean() {
 done
 fi

-# Temp dir will be auto-cleaned by cleanup_temp_files
 else
 local idx=0
 if [[ ${#existing_paths[@]} -gt 0 ]]; then
 for path in "${existing_paths[@]}"; do
 local size_kb
 size_kb=$(get_cleanup_path_size_kb "$path")
-# Ensure size_kb is numeric (additional safety layer)
 [[ ! "$size_kb" =~ ^[0-9]+$ ]] && size_kb=0

-# Optimization: Skip expensive file counting, but DO NOT skip deletion if size is 0
-# Previously: if [[ "$size_kb" -gt 0 ]]; then ...
-
 local removed=0
 if [[ "$DRY_RUN" != "true" ]]; then
-# Handle symbolic links separately (only remove the link, not the target)
 if [[ -L "$path" ]]; then
 rm "$path" 2> /dev/null && removed=1
 else
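Both removal paths above special-case symlinks so that only the link is unlinked, never its target. The shape, as a sketch (`safe_remove` is the project's guarded helper):

```bash
#!/bin/bash
# Sketch: symlink-aware removal; [[ -L ]] matches the link itself.
remove_entry() {
    local path="$1"
    if [[ -L "$path" ]]; then
        rm "$path" 2> /dev/null # unlink only; target untouched
    else
        safe_remove "$path"     # guarded deletion for real files/dirs
    fi
}
```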
@@ -545,7 +499,6 @@ safe_clean() {
 ((total_count += 1))
 removed_any=1
 else
-# Only increment failure count if we actually tried and failed
 if [[ -e "$path" && "$DRY_RUN" != "true" ]]; then
 ((removal_failed_count++))
 fi
@@ -559,13 +512,12 @@ safe_clean() {
 stop_section_spinner
 fi

-# Track permission failures reported by safe_remove
 local permission_end=${MOLE_PERMISSION_DENIED_COUNT:-0}
+# Track permission failures in debug output (avoid noisy user warnings).
 if [[ $permission_end -gt $permission_start && $removed_any -eq 0 ]]; then
 debug_log "Permission denied while cleaning: $description"
 fi
 if [[ $removal_failed_count -gt 0 && "$DRY_RUN" != "true" ]]; then
-# Log to debug instead of showing warning to user (avoid confusion)
 debug_log "Skipped $removal_failed_count items (permission denied or in use) for: $description"
 fi

@@ -580,7 +532,6 @@ safe_clean() {
 if [[ "$DRY_RUN" == "true" ]]; then
 echo -e " ${YELLOW}${ICON_DRY_RUN}${NC} $label ${YELLOW}($size_human dry)${NC}"

-# Group paths by parent directory for export (Bash 3.2 compatible)
 local paths_temp=$(create_temp_file)

 idx=0
@@ -604,9 +555,8 @@ safe_clean() {
 done
 fi

-# Group and export paths
+# Group dry-run paths by parent for a compact export list.
 if [[ -f "$paths_temp" && -s "$paths_temp" ]]; then
-# Sort by parent directory to group children together
 sort -t'|' -k1,1 "$paths_temp" | awk -F'|' '
 {
 parent = $1
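The export path above sorts `parent|child` records so the `awk` pass sees each parent's children contiguously. A self-contained sketch of that grouping (input file and output format illustrative):

```bash
#!/bin/bash
# Sketch: count children per parent directory from "parent|child" lines.
sort -t'|' -k1,1 paths.txt | awk -F'|' '
$1 != parent { if (parent != "") printf "%s (%d items)\n", parent, n; parent = $1; n = 0 }
{ n++ }
END { if (parent != "") printf "%s (%d items)\n", parent, n }
'
```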
@@ -653,7 +603,6 @@ safe_clean() {

 start_cleanup() {
 if [[ -t 1 ]]; then
-# Avoid relying on TERM since CI often runs without it
 printf '\033[2J\033[H'
 fi
 printf '\n'
@@ -669,7 +618,6 @@ start_cleanup() {
 echo ""
 SYSTEM_CLEAN=false

-# Initialize export list file
 ensure_user_file "$EXPORT_LIST_FILE"
 cat > "$EXPORT_LIST_FILE" << EOF
 # Mole Cleanup Preview - $(date '+%Y-%m-%d %H:%M:%S')
@@ -689,22 +637,19 @@ EOF
 if [[ -t 0 ]]; then
 echo -ne "${PURPLE}${ICON_ARROW}${NC} System caches need sudo — ${GREEN}Enter${NC} continue, ${GRAY}Space${NC} skip: "

-# Use read_key to properly handle all key inputs
 local choice
 choice=$(read_key)

-# Check for cancel (ESC or Q)
+# ESC/Q aborts, Space skips, Enter enables system cleanup.
 if [[ "$choice" == "QUIT" ]]; then
 echo -e " ${GRAY}Canceled${NC}"
 exit 0
 fi

-# Space = skip
 if [[ "$choice" == "SPACE" ]]; then
 echo -e " ${GRAY}Skipped${NC}"
 echo ""
 SYSTEM_CLEAN=false
-# Enter = yes, do system cleanup
 elif [[ "$choice" == "ENTER" ]]; then
 printf "\r\033[K" # Clear the prompt line
 if ensure_sudo_session "System cleanup requires admin access"; then
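`read_key` normalizes raw terminal input into tokens, which keeps the prompt logic a plain string comparison. A sketch of the dispatch (token names mirror the branches above):

```bash
#!/bin/bash
# Sketch: act on a normalized key token from the project's read_key helper.
choice=$(read_key)
case "$choice" in
    QUIT) echo "Canceled"; exit 0 ;; # ESC or q
    ENTER) SYSTEM_CLEAN=true ;;      # proceed with sudo cleanup
    *) SYSTEM_CLEAN=false ;;         # Space or any other key skips
esac
```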
@@ -717,7 +662,6 @@ EOF
 echo -e "${YELLOW}Authentication failed${NC}, continuing with user-level cleanup"
 fi
 else
-# Other keys (including arrow keys) = skip, no message needed
 SYSTEM_CLEAN=false
 echo -e " ${GRAY}Skipped${NC}"
 echo ""
@@ -732,10 +676,8 @@ EOF
 fi
 }

-# Clean Service Worker CacheStorage with domain protection
-
 perform_cleanup() {
-# Fast test mode for CI/testing - skip expensive scans
+# Test mode skips expensive scans and returns minimal output.
 local test_mode_enabled=false
 if [[ "${MOLE_TEST_MODE:-0}" == "1" ]]; then
 test_mode_enabled=true
@@ -743,10 +685,8 @@ perform_cleanup() {
 echo -e "${YELLOW}Dry Run Mode${NC} - Preview only, no deletions"
 echo ""
 fi
-# Show minimal output to satisfy test assertions
 echo -e "${GREEN}${ICON_LIST}${NC} User app cache"
 if [[ ${#WHITELIST_PATTERNS[@]} -gt 0 ]]; then
-# Check if any custom patterns exist (not defaults)
 local -a expanded_defaults
 expanded_defaults=()
 for default in "${DEFAULT_WHITELIST_PATTERNS[@]}"; do
@@ -771,16 +711,13 @@ perform_cleanup() {
 total_items=1
 files_cleaned=0
 total_size_cleaned=0
-# Don't return early - continue to summary block for debug log output
 fi

 if [[ "$test_mode_enabled" == "false" ]]; then
 echo -e "${BLUE}${ICON_ADMIN}${NC} $(detect_architecture) | Free space: $(get_free_space)"
 fi

-# Skip all expensive operations in test mode
 if [[ "$test_mode_enabled" == "true" ]]; then
-# Jump to summary block
 local summary_heading="Test mode complete"
 local -a summary_details
 summary_details=()
@@ -790,13 +727,10 @@ perform_cleanup() {
 return 0
 fi

-# Pre-check TCC permissions upfront (delegated to clean_caches module)
+# Pre-check TCC permissions to avoid mid-run prompts.
 check_tcc_permissions

-# Show whitelist info if patterns are active
 if [[ ${#WHITELIST_PATTERNS[@]} -gt 0 ]]; then
-# Count predefined vs custom patterns
-# Note: WHITELIST_PATTERNS are already expanded, need to expand defaults for comparison
 local predefined_count=0
 local custom_count=0

@@ -817,7 +751,6 @@ perform_cleanup() {
 fi
 done

-# Display whitelist status
 if [[ $custom_count -gt 0 || $predefined_count -gt 0 ]]; then
 local summary=""
 [[ $predefined_count -gt 0 ]] && summary+="$predefined_count core"
@@ -827,10 +760,8 @@ perform_cleanup() {

 echo -e "${BLUE}${ICON_SUCCESS}${NC} Whitelist: $summary"

-# List all whitelist patterns in dry-run mode for verification (Issue #206)
 if [[ "$DRY_RUN" == "true" ]]; then
 for pattern in "${WHITELIST_PATTERNS[@]}"; do
-# Skip FINDER_METADATA sentinel
 [[ "$pattern" == "$FINDER_METADATA_SENTINEL" ]] && continue
 echo -e " ${GRAY}→ $pattern${NC}"
 done
@@ -838,7 +769,6 @@ perform_cleanup() {
 fi
 fi

-# Hint about Full Disk Access for better results (only if not already granted)
 if [[ -t 1 && "$DRY_RUN" != "true" ]]; then
 local fda_status=0
 has_full_disk_access
@@ -856,20 +786,17 @@ perform_cleanup() {
 local had_errexit=0
 [[ $- == *e* ]] && had_errexit=1

-# Allow cleanup functions to fail without exiting the script
-# Individual operations use || true for granular error handling
+# Allow per-section failures without aborting the full run.
 set +e

-# ===== 1. Deep system cleanup (if admin) - Do this first while sudo is fresh =====
+# ===== 1. Deep system cleanup (if admin) =====
 if [[ "$SYSTEM_CLEAN" == "true" ]]; then
 start_section "Deep system"
-# Deep system cleanup (delegated to clean_system module)
 clean_deep_system
 clean_local_snapshots
 end_section
 fi

-# Show whitelist warnings if any
 if [[ ${#WHITELIST_WARNINGS[@]} -gt 0 ]]; then
 echo ""
 for warning in "${WHITELIST_WARNINGS[@]}"; do
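The `had_errexit` dance above relaxes `set -e` for the section runners and restores it only if it was active before, since `$-` records the shell's current option flags. Minimal sketch:

```bash
#!/bin/bash
# Sketch: temporarily disable errexit, then restore the prior state.
had_errexit=0
[[ $- == *e* ]] && had_errexit=1 # $- contains "e" when errexit is on
set +e
run_sections_that_may_fail      # hypothetical; failures must not abort
[[ $had_errexit -eq 1 ]] && set -e
```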
@@ -877,21 +804,17 @@ perform_cleanup() {
 done
 fi

-# ===== 2. User essentials =====
 start_section "User essentials"
-# User essentials cleanup (delegated to clean_user_data module)
 clean_user_essentials
 scan_external_volumes
 end_section

 start_section "Finder metadata"
-# Finder metadata cleanup (delegated to clean_user_data module)
 clean_finder_metadata
 end_section

 # ===== 3. macOS system caches =====
 start_section "macOS system caches"
-# macOS system caches cleanup (delegated to clean_user_data module)
 clean_macos_system_caches
 clean_recent_items
 clean_mail_downloads
@@ -899,55 +822,45 @@ perform_cleanup() {

 # ===== 4. Sandboxed app caches =====
 start_section "Sandboxed app caches"
-# Sandboxed app caches cleanup (delegated to clean_user_data module)
 clean_sandboxed_app_caches
 end_section

 # ===== 5. Browsers =====
 start_section "Browsers"
-# Browser caches cleanup (delegated to clean_user_data module)
 clean_browsers
 end_section

 # ===== 6. Cloud storage =====
 start_section "Cloud storage"
-# Cloud storage caches cleanup (delegated to clean_user_data module)
 clean_cloud_storage
 end_section

 # ===== 7. Office applications =====
 start_section "Office applications"
-# Office applications cleanup (delegated to clean_user_data module)
 clean_office_applications
 end_section

 # ===== 8. Developer tools =====
 start_section "Developer tools"
-# Developer tools cleanup (delegated to clean_dev module)
 clean_developer_tools
 end_section

 # ===== 9. Development applications =====
 start_section "Development applications"
-# User GUI applications cleanup (delegated to clean_user_apps module)
 clean_user_gui_applications
 end_section

 # ===== 10. Virtualization tools =====
 start_section "Virtual machine tools"
-# Virtualization tools cleanup (delegated to clean_user_data module)
 clean_virtualization_tools
 end_section

 # ===== 11. Application Support logs and caches cleanup =====
 start_section "Application Support"
-# Clean logs, Service Worker caches, Code Cache, Crashpad, stale updates, Group Containers
 clean_application_support_logs
 end_section

-# ===== 12. Orphaned app data cleanup =====
-# Only touch apps missing from scan + 60+ days inactive
-# Skip protected vendors, keep Preferences/Application Support
+# ===== 12. Orphaned app data cleanup (60+ days inactive, skip protected vendors) =====
 start_section "Uninstalled app data"
 clean_orphaned_app_data
 end_section
@@ -957,13 +870,11 @@ perform_cleanup() {

 # ===== 14. iOS device backups =====
 start_section "iOS device backups"
-# iOS device backups check (delegated to clean_user_data module)
 check_ios_device_backups
 end_section

 # ===== 15. Time Machine incomplete backups =====
 start_section "Time Machine incomplete backups"
-# Time Machine incomplete backups cleanup (delegated to clean_system module)
 clean_time_machine_failed_backups
 end_section

@@ -972,11 +883,11 @@ perform_cleanup() {

 local summary_heading=""
 local summary_status="success"
 if [[ "$DRY_RUN" == "true" ]]; then
 summary_heading="Dry run complete - no changes made"
 else
 summary_heading="Cleanup complete"
 fi

 local -a summary_details=()

@@ -985,13 +896,11 @@ perform_cleanup() {
 freed_gb=$(echo "$total_size_cleaned" | awk '{printf "%.2f", $1/1024/1024}')

 if [[ "$DRY_RUN" == "true" ]]; then
-# Build compact stats line for dry run
 local stats="Potential space: ${GREEN}${freed_gb}GB${NC}"
 [[ $files_cleaned -gt 0 ]] && stats+=" | Items: $files_cleaned"
 [[ $total_items -gt 0 ]] && stats+=" | Categories: $total_items"
 summary_details+=("$stats")

-# Add summary to export file
 {
 echo ""
 echo "# ============================================"
@@ -1005,7 +914,6 @@ perform_cleanup() {
 summary_details+=("Detailed file list: ${GRAY}$EXPORT_LIST_FILE${NC}")
 summary_details+=("Use ${GRAY}mo clean --whitelist${NC} to add protection rules")
 else
-# Build summary line: Space freed + Items cleaned
 local summary_line="Space freed: ${GREEN}${freed_gb}GB${NC}"

 if [[ $files_cleaned -gt 0 && $total_items -gt 0 ]]; then
@@ -1026,7 +934,6 @@ perform_cleanup() {
 fi
 fi

-# Free space now at the end
 local final_free_space=$(get_free_space)
 summary_details+=("Free space now: $final_free_space")
 fi
@@ -1040,7 +947,6 @@ perform_cleanup() {
 summary_details+=("Free space now: $(get_free_space)")
 fi

-# Restore strict error handling only if it was enabled
 if [[ $had_errexit -eq 1 ]]; then
 set -e
 fi
@@ -1,15 +1,18 @@
 #!/bin/bash
+# Mole - Optimize command.
+# Runs system maintenance checks and fixes.
+# Supports dry-run where applicable.

 set -euo pipefail

-# Fix locale issues (Issue #83)
+# Fix locale issues.
 export LC_ALL=C
 export LANG=C

 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
 source "$SCRIPT_DIR/lib/core/common.sh"

-# Set up cleanup trap for temporary files
+# Clean temp files on exit.
 trap cleanup_temp_files EXIT INT TERM
 source "$SCRIPT_DIR/lib/core/sudo.sh"
 source "$SCRIPT_DIR/lib/manage/update.sh"
@@ -26,7 +29,7 @@ print_header() {
 }

 run_system_checks() {
-# Skip system checks in dry-run mode (only show what optimizations would run)
+# Skip checks in dry-run mode.
 if [[ "${MOLE_DRY_RUN:-0}" == "1" ]]; then
 return 0
 fi
@@ -36,7 +39,6 @@ run_system_checks() {
 unset MOLE_SECURITY_FIXES_SKIPPED
 echo ""

-# Run checks and display results directly without grouping
 check_all_updates
 echo ""

@@ -152,7 +154,7 @@ touchid_supported() {
 fi
 fi

-# Fallback: Apple Silicon Macs usually have Touch ID
+# Fallback: Apple Silicon Macs usually have Touch ID.
 if [[ "$(uname -m)" == "arm64" ]]; then
 return 0
 fi
@@ -354,7 +356,7 @@ main() {
 fi
 print_header

-# Show dry-run mode indicator
+# Dry-run indicator.
 if [[ "${MOLE_DRY_RUN:-0}" == "1" ]]; then
 echo -e "${YELLOW}${ICON_DRY_RUN} DRY RUN MODE${NC} - No files will be modified\n"
 fi
@@ -1,6 +1,7 @@
 #!/bin/bash
-# Mole - Project purge command (mo purge)
-# Remove old project build artifacts and dependencies
+# Mole - Purge command.
+# Cleans heavy project build artifacts.
+# Interactive selection by project.

 set -euo pipefail

@@ -1,5 +1,7 @@
 #!/bin/bash
-# Entry point for the Go-based system status panel bundled with Mole.
+# Mole - Status command.
+# Runs the Go system status panel.
+# Shows live system metrics.

 set -euo pipefail

@@ -1,6 +1,7 @@
 #!/bin/bash
-# Mole - Touch ID Configuration Helper
-# Automatically configure Touch ID for sudo
+# Mole - Touch ID command.
+# Configures sudo with Touch ID.
+# Guided toggle with safety checks.

 set -euo pipefail

@@ -1,28 +1,25 @@
 #!/bin/bash
-# Mole - Uninstall Module
-# Interactive application uninstaller with keyboard navigation
-#
-# Usage:
-#   uninstall.sh          # Launch interactive uninstaller
+# Mole - Uninstall command.
+# Interactive app uninstaller.
+# Removes app files and leftovers.

 set -euo pipefail

-# Fix locale issues (avoid Perl warnings on non-English systems)
+# Fix locale issues on non-English systems.
 export LC_ALL=C
 export LANG=C

-# Get script directory and source common functions
+# Load shared helpers.
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 source "$SCRIPT_DIR/../lib/core/common.sh"

-# Set up cleanup trap for temporary files
+# Clean temp files on exit.
 trap cleanup_temp_files EXIT INT TERM
 source "$SCRIPT_DIR/../lib/ui/menu_paginated.sh"
 source "$SCRIPT_DIR/../lib/ui/app_selector.sh"
 source "$SCRIPT_DIR/../lib/uninstall/batch.sh"

-# Initialize global variables
+# State
 selected_apps=()
 declare -a apps_data=()
 declare -a selection_state=()
@@ -30,10 +27,9 @@ total_items=0
 files_cleaned=0
 total_size_cleaned=0

-# Scan applications and collect information
+# Scan applications and collect information.
 scan_applications() {
-# Application scan with intelligent caching (24h TTL)
-# This speeds up repeated scans significantly by caching app metadata
+# Cache app scan (24h TTL).
 local cache_dir="$HOME/.cache/mole"
 local cache_file="$cache_dir/app_scan_cache"
 local cache_ttl=86400 # 24 hours
@@ -41,12 +37,10 @@ scan_applications() {

 ensure_user_dir "$cache_dir"

-# Check if cache exists and is fresh
 if [[ $force_rescan == false && -f "$cache_file" ]]; then
 local cache_age=$(($(date +%s) - $(get_file_mtime "$cache_file")))
 [[ $cache_age -eq $(date +%s) ]] && cache_age=86401 # Handle mtime read failure
 if [[ $cache_age -lt $cache_ttl ]]; then
-# Cache hit - show brief feedback and return cached results
 if [[ -t 2 ]]; then
 echo -e "${GREEN}Loading from cache...${NC}" >&2
 sleep 0.3 # Brief pause so user sees the message
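The cache check above compares file mtime against a 24-hour TTL, with a failed mtime read forced past the TTL. A standalone sketch (`stat -f %m` is the direct macOS equivalent of the `get_file_mtime` helper):

```bash
#!/bin/bash
# Sketch: reuse a cached scan only while it is younger than the TTL.
cache_file="$HOME/.cache/mole/app_scan_cache"
cache_ttl=86400 # 24 hours
mtime=$(stat -f %m "$cache_file" 2> /dev/null || echo 0)
if (($(date +%s) - mtime < cache_ttl)); then
    cat "$cache_file" # fresh: serve cached results
fi
```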
@@ -56,7 +50,6 @@ scan_applications() {
 fi
 fi

-# Cache miss - perform full scan
 local inline_loading=false
 if [[ -t 1 && -t 2 ]]; then
 inline_loading=true
@@ -66,12 +59,10 @@ scan_applications() {
 local temp_file
 temp_file=$(create_temp_file)

-# Pre-cache current epoch to avoid repeated date calls
 local current_epoch
 current_epoch=$(date "+%s")

-# First pass: quickly collect all valid app paths and bundle IDs
-# This pass does NOT call mdls (slow) - only reads plists (fast)
+# Pass 1: collect app paths and bundle IDs (no mdls).
 local -a app_data_tuples=()
 local -a app_dirs=(
 "/Applications"
@@ -104,37 +95,31 @@ scan_applications() {
 local app_name
 app_name=$(basename "$app_path" .app)

-# Skip nested apps (e.g. inside Wrapper/ or Frameworks/ of another app)
-# Check if parent path component ends in .app (e.g. /Foo.app/Bar.app or /Foo.app/Contents/Bar.app)
-# This prevents false positives like /Old.apps/Target.app
+# Skip nested apps inside another .app bundle.
 local parent_dir
 parent_dir=$(dirname "$app_path")
 if [[ "$parent_dir" == *".app" || "$parent_dir" == *".app/"* ]]; then
 continue
 fi

-# Get bundle ID (fast plist read, no mdls call yet)
+# Bundle ID from plist (fast path).
 local bundle_id="unknown"
 if [[ -f "$app_path/Contents/Info.plist" ]]; then
 bundle_id=$(defaults read "$app_path/Contents/Info.plist" CFBundleIdentifier 2> /dev/null || echo "unknown")
 fi

-# Skip system critical apps (input methods, system components, etc.)
 if should_protect_from_uninstall "$bundle_id"; then
 continue
 fi

-# Store tuple: app_path|app_name|bundle_id
-# Display name and metadata will be resolved in parallel later (second pass)
+# Store tuple for pass 2 (metadata + size).
 app_data_tuples+=("${app_path}|${app_name}|${bundle_id}")
 done < <(command find "$app_dir" -name "*.app" -maxdepth 3 -print0 2> /dev/null)
 done

-# Second pass: process each app with parallel metadata extraction
-# This pass calls mdls (slow) and calculates sizes, but does so in parallel
+# Pass 2: metadata + size in parallel (mdls is slow).
 local app_count=0
 local total_apps=${#app_data_tuples[@]}
-# Bound parallelism - for metadata queries, can go higher since it's mostly waiting
 local max_parallel
 max_parallel=$(get_optimal_parallel_jobs "io")
 if [[ $max_parallel -lt 8 ]]; then
@@ -151,25 +136,17 @@ scan_applications() {
|
|||||||
|
|
||||||
IFS='|' read -r app_path app_name bundle_id <<< "$app_data_tuple"
|
IFS='|' read -r app_path app_name bundle_id <<< "$app_data_tuple"
|
||||||
|
|
||||||
# Get localized display name (moved from first pass for better performance)
|
# Display name priority: mdls display name → bundle display → bundle name → folder.
|
||||||
# Priority order for name selection (prefer localized names):
|
|
||||||
# 1. System metadata display name (kMDItemDisplayName) - respects system language
|
|
||||||
# 2. CFBundleDisplayName - usually localized
|
|
||||||
# 3. CFBundleName - fallback
|
|
||||||
# 4. App folder name - last resort
|
|
||||||
local display_name="$app_name"
|
local display_name="$app_name"
|
||||||
if [[ -f "$app_path/Contents/Info.plist" ]]; then
|
if [[ -f "$app_path/Contents/Info.plist" ]]; then
|
||||||
# Try to get localized name from system metadata (best for i18n)
|
|
||||||
local md_display_name
|
local md_display_name
|
||||||
md_display_name=$(run_with_timeout 0.05 mdls -name kMDItemDisplayName -raw "$app_path" 2> /dev/null || echo "")
|
md_display_name=$(run_with_timeout 0.05 mdls -name kMDItemDisplayName -raw "$app_path" 2> /dev/null || echo "")
|
||||||
|
|
||||||
# Get bundle names from plist
|
|
||||||
local bundle_display_name
|
local bundle_display_name
|
||||||
bundle_display_name=$(plutil -extract CFBundleDisplayName raw "$app_path/Contents/Info.plist" 2> /dev/null)
|
bundle_display_name=$(plutil -extract CFBundleDisplayName raw "$app_path/Contents/Info.plist" 2> /dev/null)
|
||||||
local bundle_name
|
local bundle_name
|
||||||
bundle_name=$(plutil -extract CFBundleName raw "$app_path/Contents/Info.plist" 2> /dev/null)
|
bundle_name=$(plutil -extract CFBundleName raw "$app_path/Contents/Info.plist" 2> /dev/null)
|
||||||
|
|
||||||
# Sanitize metadata values (prevent paths, pipes, and newlines)
|
|
||||||
if [[ "$md_display_name" == /* ]]; then md_display_name=""; fi
|
if [[ "$md_display_name" == /* ]]; then md_display_name=""; fi
|
||||||
md_display_name="${md_display_name//|/-}"
|
md_display_name="${md_display_name//|/-}"
|
||||||
md_display_name="${md_display_name//[$'\t\r\n']/}"
|
md_display_name="${md_display_name//[$'\t\r\n']/}"
|
||||||
@@ -180,7 +157,6 @@ scan_applications() {
|
|||||||
bundle_name="${bundle_name//|/-}"
|
bundle_name="${bundle_name//|/-}"
|
||||||
bundle_name="${bundle_name//[$'\t\r\n']/}"
|
bundle_name="${bundle_name//[$'\t\r\n']/}"
|
||||||
|
|
||||||
# Select best available name
|
|
||||||
if [[ -n "$md_display_name" && "$md_display_name" != "(null)" && "$md_display_name" != "$app_name" ]]; then
|
if [[ -n "$md_display_name" && "$md_display_name" != "(null)" && "$md_display_name" != "$app_name" ]]; then
|
||||||
display_name="$md_display_name"
|
display_name="$md_display_name"
|
||||||
elif [[ -n "$bundle_display_name" && "$bundle_display_name" != "(null)" ]]; then
|
elif [[ -n "$bundle_display_name" && "$bundle_display_name" != "(null)" ]]; then
|
||||||
@@ -190,29 +166,25 @@ scan_applications() {
|
|||||||
fi
|
fi
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Final safety check: if display_name looks like a path, revert to app_name
|
|
||||||
if [[ "$display_name" == /* ]]; then
|
if [[ "$display_name" == /* ]]; then
|
||||||
display_name="$app_name"
|
display_name="$app_name"
|
||||||
fi
|
fi
|
||||||
# Ensure no pipes or newlines in final display name
|
|
||||||
display_name="${display_name//|/-}"
|
display_name="${display_name//|/-}"
|
||||||
display_name="${display_name//[$'\t\r\n']/}"
|
display_name="${display_name//[$'\t\r\n']/}"
|
||||||
|
|
||||||
# Calculate app size (in parallel for performance)
|
# App size (KB → human).
|
||||||
local app_size="N/A"
|
local app_size="N/A"
|
||||||
local app_size_kb="0"
|
local app_size_kb="0"
|
||||||
if [[ -d "$app_path" ]]; then
|
if [[ -d "$app_path" ]]; then
|
||||||
# Get size in KB, then format for display
|
|
||||||
app_size_kb=$(get_path_size_kb "$app_path")
|
app_size_kb=$(get_path_size_kb "$app_path")
|
||||||
app_size=$(bytes_to_human "$((app_size_kb * 1024))")
|
app_size=$(bytes_to_human "$((app_size_kb * 1024))")
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Get last used date with fallback strategy
|
# Last used: mdls (fast timeout) → mtime.
|
||||||
local last_used="Never"
|
local last_used="Never"
|
||||||
local last_used_epoch=0
|
local last_used_epoch=0
|
||||||
|
|
||||||
if [[ -d "$app_path" ]]; then
|
if [[ -d "$app_path" ]]; then
|
||||||
# Try mdls first with short timeout (0.1s) for accuracy, fallback to mtime for speed
|
|
||||||
local metadata_date
|
local metadata_date
|
||||||
metadata_date=$(run_with_timeout 0.1 mdls -name kMDItemLastUsedDate -raw "$app_path" 2> /dev/null || echo "")
|
metadata_date=$(run_with_timeout 0.1 mdls -name kMDItemLastUsedDate -raw "$app_path" 2> /dev/null || echo "")
|
||||||
|
|
||||||
@@ -220,7 +192,6 @@ scan_applications() {
|
|||||||
last_used_epoch=$(date -j -f "%Y-%m-%d %H:%M:%S %z" "$metadata_date" "+%s" 2> /dev/null || echo "0")
|
last_used_epoch=$(date -j -f "%Y-%m-%d %H:%M:%S %z" "$metadata_date" "+%s" 2> /dev/null || echo "0")
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Fallback if mdls failed or returned nothing
|
|
||||||
if [[ "$last_used_epoch" -eq 0 ]]; then
|
if [[ "$last_used_epoch" -eq 0 ]]; then
|
||||||
last_used_epoch=$(get_file_mtime "$app_path")
|
last_used_epoch=$(get_file_mtime "$app_path")
|
||||||
fi
|
fi
|
||||||
@@ -276,7 +247,6 @@ scan_applications() {
|
|||||||
) &
|
) &
|
||||||
spinner_pid=$!
|
spinner_pid=$!
|
||||||
|
|
||||||
# Process apps in parallel batches
|
|
||||||
for app_data_tuple in "${app_data_tuples[@]}"; do
|
for app_data_tuple in "${app_data_tuples[@]}"; do
|
||||||
((app_count++))
|
((app_count++))
|
||||||
process_app_metadata "$app_data_tuple" "$temp_file" "$current_epoch" &
|
process_app_metadata "$app_data_tuple" "$temp_file" "$current_epoch" &
|
||||||
@@ -368,7 +338,7 @@ load_applications() {
|
|||||||
return 0
|
return 0
|
||||||
}
|
}
|
||||||
|
|
||||||
# Cleanup function - restore cursor and clean up
|
# Cleanup: restore cursor and kill keepalive.
|
||||||
cleanup() {
|
cleanup() {
|
||||||
if [[ "${MOLE_ALT_SCREEN_ACTIVE:-}" == "1" ]]; then
|
if [[ "${MOLE_ALT_SCREEN_ACTIVE:-}" == "1" ]]; then
|
||||||
leave_alt_screen
|
leave_alt_screen
|
||||||
@@ -387,7 +357,7 @@ trap cleanup EXIT INT TERM
|
|||||||
|
|
||||||
main() {
|
main() {
|
||||||
local force_rescan=false
|
local force_rescan=false
|
||||||
# Parse global flags locally if needed (currently none specific to uninstall)
|
# Global flags
|
||||||
for arg in "$@"; do
|
for arg in "$@"; do
|
||||||
case "$arg" in
|
case "$arg" in
|
||||||
"--debug")
|
"--debug")
|
||||||
@@ -403,7 +373,6 @@ main() {
|
|||||||
|
|
||||||
hide_cursor
|
hide_cursor
|
||||||
|
|
||||||
# Main interaction loop
|
|
||||||
while true; do
|
while true; do
|
||||||
local needs_scanning=true
|
local needs_scanning=true
|
||||||
local cache_file="$HOME/.cache/mole/app_scan_cache"
|
local cache_file="$HOME/.cache/mole/app_scan_cache"
|
||||||
|
|||||||
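
The scan above is deliberately two-phase: pass 1 touches only Info.plist (cheap), pass 2 pays the mdls/size cost behind a bounded pool. A minimal Go sketch of the same pattern; the `app` type, `scanApps`, and the callback are illustrative, not Mole's actual helpers:

    package main

    import (
    	"fmt"
    	"sync"
    )

    // app is an illustrative record; the shell stores path|name|bundle_id tuples.
    type app struct {
    	path string
    	size int64
    }

    // scanApps: pass 1 builds the work list cheaply; pass 2 runs the slow
    // per-item work (mdls, du) behind a semaphore-bounded worker pool.
    func scanApps(paths []string, maxParallel int, slow func(string) int64) []app {
    	apps := make([]app, len(paths))
    	for i, p := range paths { // pass 1: cheap collection only
    		apps[i] = app{path: p}
    	}
    	sem := make(chan struct{}, maxParallel)
    	var wg sync.WaitGroup
    	for i := range apps { // pass 2: bounded parallelism
    		wg.Add(1)
    		sem <- struct{}{}
    		go func(i int) {
    			defer wg.Done()
    			defer func() { <-sem }()
    			apps[i].size = slow(apps[i].path)
    		}(i)
    	}
    	wg.Wait()
    	return apps
    }

    func main() {
    	out := scanApps([]string{"/Applications/A.app", "/Applications/B.app"}, 8,
    		func(p string) int64 { return int64(len(p)) }) // stand-in for mdls/du
    	fmt.Println(out)
    }
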
@@ -75,11 +75,6 @@ func TestScanPathConcurrentBasic(t *testing.T) {
 if bytes := atomic.LoadInt64(&bytesScanned); bytes == 0 {
 t.Fatalf("expected byte counter to increase")
 }
-// current path update is throttled, so it might be empty for small scans
-// if current == "" {
-// t.Fatalf("expected current path to be updated")
-// }

 foundSymlink := false
 for _, entry := range result.Entries {
 if strings.HasSuffix(entry.Name, " →") {
@@ -148,7 +143,7 @@ func TestOverviewStoreAndLoad(t *testing.T) {
 t.Fatalf("snapshot mismatch: want %d, got %d", want, got)
 }

-// Force reload from disk and ensure value persists.
+// Reload from disk and ensure value persists.
 resetOverviewSnapshotForTest()
 got, err = loadStoredOverviewSize(path)
 if err != nil {
@@ -220,7 +215,7 @@ func TestMeasureOverviewSize(t *testing.T) {
 t.Fatalf("expected positive size, got %d", size)
 }

-// Ensure snapshot stored
+// Ensure snapshot stored.
 cached, err := loadStoredOverviewSize(target)
 if err != nil {
 t.Fatalf("loadStoredOverviewSize: %v", err)
@@ -279,13 +274,13 @@ func TestLoadCacheExpiresWhenDirectoryChanges(t *testing.T) {
 t.Fatalf("saveCacheToDisk: %v", err)
 }

-// Touch directory to advance mtime beyond grace period.
+// Advance mtime beyond grace period.
 time.Sleep(time.Millisecond * 10)
 if err := os.Chtimes(target, time.Now(), time.Now()); err != nil {
 t.Fatalf("chtimes: %v", err)
 }

-// Force modtime difference beyond grace window by simulating an older cache entry.
+// Simulate older cache entry to exceed grace window.
 cachePath, err := getCachePath(target)
 if err != nil {
 t.Fatalf("getCachePath: %v", err)
@@ -335,24 +330,24 @@ func TestScanPathPermissionError(t *testing.T) {
 t.Fatalf("create locked dir: %v", err)
 }

-// Create a file inside before locking, just to be sure
+// Create a file before locking.
 if err := os.WriteFile(filepath.Join(lockedDir, "secret.txt"), []byte("shh"), 0o644); err != nil {
 t.Fatalf("write secret: %v", err)
 }

-// Remove permissions
+// Remove permissions.
 if err := os.Chmod(lockedDir, 0o000); err != nil {
 t.Fatalf("chmod 000: %v", err)
 }
 defer func() {
-// Restore permissions so cleanup can work
+// Restore permissions for cleanup.
 _ = os.Chmod(lockedDir, 0o755)
 }()

 var files, dirs, bytes int64
 current := ""

-// Scanning the locked dir itself should fail
+// Scanning the locked dir itself should fail.
 _, err := scanPathConcurrent(lockedDir, &files, &dirs, &bytes, &current)
 if err == nil {
 t.Fatalf("expected error scanning locked directory, got nil")
@@ -222,7 +222,7 @@ func loadCacheFromDisk(path string) (*cacheEntry, error) {
 }

 if info.ModTime().After(entry.ModTime) {
-// Only expire cache if the directory has been newer for longer than the grace window.
+// Allow grace window.
 if cacheModTimeGrace <= 0 || info.ModTime().Sub(entry.ModTime) > cacheModTimeGrace {
 return nil, fmt.Errorf("cache expired: directory modified")
 }
@@ -290,29 +290,23 @@ func removeOverviewSnapshot(path string) {
 }
 }

-// prefetchOverviewCache scans overview directories in background
-// to populate cache for faster overview mode access
+// prefetchOverviewCache warms overview cache in background.
 func prefetchOverviewCache(ctx context.Context) {
 entries := createOverviewEntries()

-// Check which entries need refresh
 var needScan []string
 for _, entry := range entries {
-// Skip if we have fresh cache
 if size, err := loadStoredOverviewSize(entry.Path); err == nil && size > 0 {
 continue
 }
 needScan = append(needScan, entry.Path)
 }

-// Nothing to scan
 if len(needScan) == 0 {
 return
 }

-// Scan and cache in background with context cancellation support
 for _, path := range needScan {
-// Check if context is cancelled
 select {
 case <-ctx.Done():
 return
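
The grace-window check above means a directory that is only slightly newer than its cache entry does not invalidate the cache; expiry needs the mtime gap to exceed cacheModTimeGrace. The predicate in isolation, as a sketch with illustrative names:

    package main

    import (
    	"fmt"
    	"time"
    )

    // expired mirrors the hunk above: a directory invalidates its cached scan
    // only once its mtime outruns the cache entry by more than the grace window.
    func expired(dirMod, cacheMod time.Time, grace time.Duration) bool {
    	if !dirMod.After(cacheMod) {
    		return false
    	}
    	return grace <= 0 || dirMod.Sub(cacheMod) > grace
    }

    func main() {
    	now := time.Now()
    	fmt.Println(expired(now, now.Add(-10*time.Minute), 30*time.Minute)) // false: within grace
    	fmt.Println(expired(now, now.Add(-2*time.Hour), 30*time.Minute))    // true: beyond grace
    }
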
@@ -5,23 +5,20 @@ import (
 "strings"
 )

-// isCleanableDir checks if a directory is safe to manually delete
-// but NOT cleaned by mo clean (so user might want to delete it manually)
+// isCleanableDir marks paths safe to delete manually (not handled by mo clean).
 func isCleanableDir(path string) bool {
 if path == "" {
 return false
 }

-// Exclude paths that mo clean will handle automatically
-// These are system caches/logs that mo clean already processes
+// Exclude paths mo clean already handles.
 if isHandledByMoClean(path) {
 return false
 }

 baseName := filepath.Base(path)

-// Only mark project dependencies and build outputs
-// These are safe to delete but mo clean won't touch them
+// Project dependencies and build outputs are safe.
 if projectDependencyDirs[baseName] {
 return true
 }
@@ -29,9 +26,8 @@ func isCleanableDir(path string) bool {
 return false
 }

-// isHandledByMoClean checks if this path will be cleaned by mo clean
+// isHandledByMoClean checks if a path is cleaned by mo clean.
 func isHandledByMoClean(path string) bool {
-// Paths that mo clean handles (from clean.sh)
 cleanPaths := []string{
 "/Library/Caches/",
 "/Library/Logs/",
@@ -49,16 +45,15 @@ func isHandledByMoClean(path string) bool {
 return false
 }

-// Project dependency and build directories
-// These are safe to delete manually but mo clean won't touch them
+// Project dependency and build directories.
 var projectDependencyDirs = map[string]bool{
-// JavaScript/Node dependencies
+// JavaScript/Node.
 "node_modules": true,
 "bower_components": true,
-".yarn": true, // Yarn local cache
-".pnpm-store": true, // pnpm store
+".yarn": true,
+".pnpm-store": true,

-// Python dependencies and outputs
+// Python.
 "venv": true,
 ".venv": true,
 "virtualenv": true,
@@ -68,18 +63,18 @@ var projectDependencyDirs = map[string]bool{
 ".ruff_cache": true,
 ".tox": true,
 ".eggs": true,
-"htmlcov": true, // Coverage reports
-".ipynb_checkpoints": true, // Jupyter checkpoints
+"htmlcov": true,
+".ipynb_checkpoints": true,

-// Ruby dependencies
+// Ruby.
 "vendor": true,
 ".bundle": true,

-// Java/Kotlin/Scala
-".gradle": true, // Project-level Gradle cache
-"out": true, // IntelliJ IDEA build output
+// Java/Kotlin/Scala.
+".gradle": true,
+"out": true,

-// Build outputs (can be rebuilt)
+// Build outputs.
 "build": true,
 "dist": true,
 "target": true,
@@ -88,25 +83,25 @@ var projectDependencyDirs = map[string]bool{
 ".output": true,
 ".parcel-cache": true,
 ".turbo": true,
-".vite": true, // Vite cache
-".nx": true, // Nx cache
+".vite": true,
+".nx": true,
 "coverage": true,
 ".coverage": true,
-".nyc_output": true, // NYC coverage
+".nyc_output": true,

-// Frontend framework outputs
-".angular": true, // Angular CLI cache
-".svelte-kit": true, // SvelteKit build
-".astro": true, // Astro cache
-".docusaurus": true, // Docusaurus build
+// Frontend framework outputs.
+".angular": true,
+".svelte-kit": true,
+".astro": true,
+".docusaurus": true,

-// iOS/macOS development
+// Apple dev.
 "DerivedData": true,
 "Pods": true,
 ".build": true,
 "Carthage": true,
 ".dart_tool": true,

-// Other tools
-".terraform": true, // Terraform plugins
+// Other tools.
+".terraform": true,
 }
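
isCleanableDir is a two-gate predicate: first reject anything mo clean already covers, then accept only base names from the dependency/build table. A compact sketch; the two sets below are small illustrative subsets of the real tables:

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"strings"
    )

    // Illustrative subsets of cleanPaths and projectDependencyDirs above.
    var handledPaths = []string{"/Library/Caches/", "/Library/Logs/"}
    var depDirs = map[string]bool{"node_modules": true, "DerivedData": true}

    // cleanable: not covered by mo clean, and a known dependency/build dir.
    func cleanable(path string) bool {
    	for _, p := range handledPaths {
    		if strings.Contains(path, p) {
    			return false
    		}
    	}
    	return depDirs[filepath.Base(path)]
    }

    func main() {
    	fmt.Println(cleanable("/Users/x/proj/node_modules"))       // true
    	fmt.Println(cleanable("/Users/x/Library/Caches/whatever")) // false
    }
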
@@ -6,35 +6,35 @@ const (
 maxEntries = 30
 maxLargeFiles = 30
 barWidth = 24
-minLargeFileSize = 100 << 20 // 100 MB
-defaultViewport = 12 // Default viewport when terminal height is unknown
-overviewCacheTTL = 7 * 24 * time.Hour // 7 days
+minLargeFileSize = 100 << 20
+defaultViewport = 12
+overviewCacheTTL = 7 * 24 * time.Hour
 overviewCacheFile = "overview_sizes.json"
-duTimeout = 30 * time.Second // Fail faster to fallback to concurrent scan
+duTimeout = 30 * time.Second
 mdlsTimeout = 5 * time.Second
-maxConcurrentOverview = 8 // Increased parallel overview scans
-batchUpdateSize = 100 // Batch atomic updates every N items
-cacheModTimeGrace = 30 * time.Minute // Ignore minor directory mtime bumps
+maxConcurrentOverview = 8
+batchUpdateSize = 100
+cacheModTimeGrace = 30 * time.Minute

-// Worker pool configuration
-minWorkers = 16 // Safe baseline for older machines
-maxWorkers = 64 // Cap at 64 to avoid OS resource contention
-cpuMultiplier = 4 // Balanced CPU usage
-maxDirWorkers = 32 // Limit concurrent subdirectory scans
-openCommandTimeout = 10 * time.Second // Timeout for open/reveal commands
+// Worker pool limits.
+minWorkers = 16
+maxWorkers = 64
+cpuMultiplier = 4
+maxDirWorkers = 32
+openCommandTimeout = 10 * time.Second
 )

 var foldDirs = map[string]bool{
-// Version control
+// VCS.
 ".git": true,
 ".svn": true,
 ".hg": true,

-// JavaScript/Node
+// JavaScript/Node.
 "node_modules": true,
 ".npm": true,
-"_npx": true, // ~/.npm/_npx global cache
-"_cacache": true, // ~/.npm/_cacache
+"_npx": true,
+"_cacache": true,
 "_logs": true,
 "_locks": true,
 "_quick": true,
@@ -56,7 +56,7 @@ var foldDirs = map[string]bool{
 ".bun": true,
 ".deno": true,

-// Python
+// Python.
 "__pycache__": true,
 ".pytest_cache": true,
 ".mypy_cache": true,
@@ -73,7 +73,7 @@ var foldDirs = map[string]bool{
 ".pip": true,
 ".pipx": true,

-// Ruby/Go/PHP (vendor), Java/Kotlin/Scala/Rust (target)
+// Ruby/Go/PHP (vendor), Java/Kotlin/Scala/Rust (target).
 "vendor": true,
 ".bundle": true,
 "gems": true,
@@ -88,20 +88,20 @@ var foldDirs = map[string]bool{
 ".composer": true,
 ".cargo": true,

-// Build outputs
+// Build outputs.
 "build": true,
 "dist": true,
 ".output": true,
 "coverage": true,
 ".coverage": true,

-// IDE
+// IDE.
 ".idea": true,
 ".vscode": true,
 ".vs": true,
 ".fleet": true,

-// Cache directories
+// Cache directories.
 ".cache": true,
 "__MACOSX": true,
 ".DS_Store": true,
@@ -121,18 +121,18 @@ var foldDirs = map[string]bool{
 ".sdkman": true,
 ".nvm": true,

-// macOS specific
+// macOS.
 "Application Scripts": true,
 "Saved Application State": true,

-// iCloud
+// iCloud.
 "Mobile Documents": true,

-// Docker & Containers
+// Containers.
 ".docker": true,
 ".containerd": true,

-// Mobile development
+// Mobile development.
 "Pods": true,
 "DerivedData": true,
 ".build": true,
@@ -140,18 +140,18 @@ var foldDirs = map[string]bool{
 "Carthage": true,
 ".dart_tool": true,

-// Web frameworks
+// Web frameworks.
 ".angular": true,
 ".svelte-kit": true,
 ".astro": true,
 ".solid": true,

-// Databases
+// Databases.
 ".mysql": true,
 ".postgres": true,
 "mongodb": true,

-// Other
+// Other.
 ".terraform": true,
 ".vagrant": true,
 "tmp": true,
@@ -170,22 +170,22 @@ var skipSystemDirs = map[string]bool{
 "bin": true,
 "etc": true,
 "var": true,
-"opt": false, // User might want to specific check opt
-"usr": false, // User might check usr
-"Volumes": true, // Skip external drives by default when scanning root
-"Network": true, // Skip network mounts
+"opt": false,
+"usr": false,
+"Volumes": true,
+"Network": true,
 ".vol": true,
 ".Spotlight-V100": true,
 ".fseventsd": true,
 ".DocumentRevisions-V100": true,
 ".TemporaryItems": true,
-".MobileBackups": true, // Time Machine local snapshots
+".MobileBackups": true,
 }

 var defaultSkipDirs = map[string]bool{
-"nfs": true, // Network File System
-"PHD": true, // Parallels Shared Folders / Home Directories
-"Permissions": true, // Common macOS deny folder
+"nfs": true,
+"PHD": true,
+"Permissions": true,
 }

 var skipExtensions = map[string]bool{
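
The worker-pool constants read as a clamp of a CPU-derived count into [minWorkers, maxWorkers]. The exact formula is outside this hunk, so the sketch below is an assumption about how they combine:

    package main

    import (
    	"fmt"
    	"runtime"
    )

    const (
    	minWorkers    = 16
    	maxWorkers    = 64
    	cpuMultiplier = 4
    )

    // workerCount clamps NumCPU*cpuMultiplier into [minWorkers, maxWorkers].
    // Assumption: this mirrors how the constants above are combined.
    func workerCount() int {
    	n := runtime.NumCPU() * cpuMultiplier
    	if n < minWorkers {
    		return minWorkers
    	}
    	if n > maxWorkers {
    		return maxWorkers
    	}
    	return n
    }

    func main() { fmt.Println(workerCount()) }
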
@@ -23,13 +23,13 @@ func deletePathCmd(path string, counter *int64) tea.Cmd {
 }
 }

-// deleteMultiplePathsCmd deletes multiple paths and returns combined results
+// deleteMultiplePathsCmd deletes paths and aggregates results.
 func deleteMultiplePathsCmd(paths []string, counter *int64) tea.Cmd {
 return func() tea.Msg {
 var totalCount int64
 var errors []string

-// Delete deeper paths first to avoid parent removal triggering child not-exist errors
+// Delete deeper paths first to avoid parent/child conflicts.
 pathsToDelete := append([]string(nil), paths...)
 sort.Slice(pathsToDelete, func(i, j int) bool {
 return strings.Count(pathsToDelete[i], string(filepath.Separator)) > strings.Count(pathsToDelete[j], string(filepath.Separator))
@@ -40,7 +40,7 @@ func deleteMultiplePathsCmd(paths []string, counter *int64) tea.Cmd {
 totalCount += count
 if err != nil {
 if os.IsNotExist(err) {
-continue // Parent already removed - not an actionable error
+continue
 }
 errors = append(errors, err.Error())
 }
@@ -51,17 +51,16 @@ func deleteMultiplePathsCmd(paths []string, counter *int64) tea.Cmd {
 resultErr = &multiDeleteError{errors: errors}
 }

-// Return empty path to trigger full refresh since multiple items were deleted
 return deleteProgressMsg{
 done: true,
 err: resultErr,
 count: totalCount,
-path: "", // Empty path signals multiple deletions
+path: "",
 }
 }
 }

-// multiDeleteError holds multiple deletion errors
+// multiDeleteError holds multiple deletion errors.
 type multiDeleteError struct {
 errors []string
 }
@@ -79,14 +78,13 @@ func deletePathWithProgress(root string, counter *int64) (int64, error) {

 err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
 if err != nil {
-// Skip permission errors but continue walking
+// Skip permission errors but continue.
 if os.IsPermission(err) {
 if firstErr == nil {
 firstErr = err
 }
 return filepath.SkipDir
 }
-// For other errors, record and continue
 if firstErr == nil {
 firstErr = err
 }
@@ -100,7 +98,6 @@ func deletePathWithProgress(root string, counter *int64) (int64, error) {
 atomic.StoreInt64(counter, count)
 }
 } else if firstErr == nil {
-// Record first deletion error
 firstErr = removeErr
 }
 }
@@ -108,19 +105,15 @@ func deletePathWithProgress(root string, counter *int64) (int64, error) {
 return nil
 })

-// Track walk error separately
 if err != nil && firstErr == nil {
 firstErr = err
 }

-// Try to remove remaining directory structure
-// Even if this fails, we still report files deleted
 if removeErr := os.RemoveAll(root); removeErr != nil {
 if firstErr == nil {
 firstErr = removeErr
 }
 }

-// Always return count (even if there were errors), along with first error
 return count, firstErr
 }
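
Sorting by separator count before deleting, as in the hunk above, removes children before their parents, so a parent os.RemoveAll never turns an in-flight child delete into a spurious not-exist error. The ordering on its own:

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"sort"
    	"strings"
    )

    // orderForDelete puts deeper paths first, exactly as the sort above does.
    func orderForDelete(paths []string) []string {
    	out := append([]string(nil), paths...)
    	sort.Slice(out, func(i, j int) bool {
    		return strings.Count(out[i], string(filepath.Separator)) >
    			strings.Count(out[j], string(filepath.Separator))
    	})
    	return out
    }

    func main() {
    	fmt.Println(orderForDelete([]string{"/a", "/a/b/c", "/a/b"}))
    	// [/a/b/c /a/b /a]
    }
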
@@ -11,9 +11,7 @@ func TestDeleteMultiplePathsCmdHandlesParentChild(t *testing.T) {
 parent := filepath.Join(base, "parent")
 child := filepath.Join(parent, "child")

-// Create structure:
-// parent/fileA
-// parent/child/fileC
+// Structure: parent/fileA, parent/child/fileC.
 if err := os.MkdirAll(child, 0o755); err != nil {
 t.Fatalf("mkdir: %v", err)
 }
@@ -18,7 +18,7 @@ func displayPath(path string) string {
 return path
 }

-// truncateMiddle truncates string in the middle, keeping head and tail.
+// truncateMiddle trims the middle, keeping head and tail.
 func truncateMiddle(s string, maxWidth int) string {
 runes := []rune(s)
 currentWidth := displayWidth(s)
@@ -27,9 +27,7 @@ func truncateMiddle(s string, maxWidth int) string {
 return s
 }

-// Reserve 3 width for "..."
 if maxWidth < 10 {
-// Simple truncation for very small width
 width := 0
 for i, r := range runes {
 width += runeWidth(r)
@@ -40,11 +38,9 @@ func truncateMiddle(s string, maxWidth int) string {
 return s
 }

-// Keep more of the tail (filename usually more important)
 targetHeadWidth := (maxWidth - 3) / 3
 targetTailWidth := maxWidth - 3 - targetHeadWidth

-// Find head cutoff point based on display width
 headWidth := 0
 headIdx := 0
 for i, r := range runes {
@@ -56,7 +52,6 @@ func truncateMiddle(s string, maxWidth int) string {
 headIdx = i + 1
 }

-// Find tail cutoff point
 tailWidth := 0
 tailIdx := len(runes)
 for i := len(runes) - 1; i >= 0; i-- {
@@ -108,7 +103,6 @@ func coloredProgressBar(value, max int64, percent float64) string {
 filled = barWidth
 }

-// Choose color based on percentage
 var barColor string
 if percent >= 50 {
 barColor = colorRed
@@ -142,7 +136,7 @@ func coloredProgressBar(value, max int64, percent float64) string {
 return bar + colorReset
 }

-// Calculate display width considering CJK characters and Emoji.
+// runeWidth returns display width for wide characters and emoji.
 func runeWidth(r rune) int {
 if r >= 0x4E00 && r <= 0x9FFF || // CJK Unified Ideographs
 r >= 0x3400 && r <= 0x4DBF || // CJK Extension A
@@ -173,18 +167,16 @@ func displayWidth(s string) int {
 return width
 }

-// calculateNameWidth computes the optimal name column width based on terminal width.
-// Fixed elements: prefix(3) + num(3) + bar(24) + percent(7) + sep(5) + icon(3) + size(12) + hint(4) = 61
+// calculateNameWidth computes name column width from terminal width.
 func calculateNameWidth(termWidth int) int {
 const fixedWidth = 61
 available := termWidth - fixedWidth

-// Constrain to reasonable bounds
 if available < 24 {
-return 24 // Minimum for readability
+return 24
 }
 if available > 60 {
-return 60 // Maximum to avoid overly wide columns
+return 60
 }
 return available
 }
@@ -233,7 +225,7 @@ func padName(name string, targetWidth int) string {
 return name + strings.Repeat(" ", targetWidth-currentWidth)
 }

-// formatUnusedTime formats the time since last access in a compact way.
+// formatUnusedTime formats time since last access.
 func formatUnusedTime(lastAccess time.Time) string {
 if lastAccess.IsZero() {
 return ""
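
truncateMiddle and padName only behave because width is counted in terminal columns, not runes: CJK ideographs take two columns. A reduced sketch of the accounting; the real runeWidth covers more ranges, including emoji:

    package main

    import "fmt"

    // runeWidth: CJK Unified Ideographs occupy 2 columns, everything else 1.
    func runeWidth(r rune) int {
    	if r >= 0x4E00 && r <= 0x9FFF {
    		return 2
    	}
    	return 1
    }

    // displayWidth sums per-rune column widths.
    func displayWidth(s string) int {
    	w := 0
    	for _, r := range s {
    		w += runeWidth(r)
    	}
    	return w
    }

    func main() {
    	fmt.Println(displayWidth("mole"))   // 4
    	fmt.Println(displayWidth("文件夹")) // 6, not 3
    }
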
@@ -168,7 +168,6 @@ func TestTruncateMiddle(t *testing.T) {
 }

 func TestDisplayPath(t *testing.T) {
-// This test assumes HOME is set
 tests := []struct {
 name string
 setup func() string
@@ -1,15 +1,10 @@
 package main

-// entryHeap implements heap.Interface for a min-heap of dirEntry (sorted by Size)
-// Since we want Top N Largest, we use a Min Heap of size N.
-// When adding a new item:
-// 1. If heap size < N: push
-// 2. If heap size == N and item > min (root): pop min, push item
-// The heap will thus maintain the largest N items.
+// entryHeap is a min-heap of dirEntry used to keep Top N largest entries.
 type entryHeap []dirEntry

 func (h entryHeap) Len() int { return len(h) }
-func (h entryHeap) Less(i, j int) bool { return h[i].Size < h[j].Size } // Min-heap based on Size
+func (h entryHeap) Less(i, j int) bool { return h[i].Size < h[j].Size }
 func (h entryHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] }

 func (h *entryHeap) Push(x interface{}) {
@@ -24,7 +19,7 @@ func (h *entryHeap) Pop() interface{} {
 return x
 }

-// largeFileHeap implements heap.Interface for fileEntry
+// largeFileHeap is a min-heap for fileEntry.
 type largeFileHeap []fileEntry

 func (h largeFileHeap) Len() int { return len(h) }
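
The deleted comment spelled out the Top-N idiom this heap exists for: keep a min-heap of size N and replace the root whenever a new item beats the current minimum. Self-contained, with an illustrative entry type:

    package main

    import (
    	"container/heap"
    	"fmt"
    )

    type entry struct{ size int64 }

    // minHeap orders by size ascending, so the root is the smallest kept item.
    type minHeap []entry

    func (h minHeap) Len() int            { return len(h) }
    func (h minHeap) Less(i, j int) bool  { return h[i].size < h[j].size }
    func (h minHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
    func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(entry)) }
    func (h *minHeap) Pop() interface{} {
    	old := *h
    	x := old[len(old)-1]
    	*h = old[:len(old)-1]
    	return x
    }

    // topN: push while under capacity, otherwise replace the root when beaten.
    func topN(items []entry, n int) []entry {
    	h := &minHeap{}
    	for _, it := range items {
    		if h.Len() < n {
    			heap.Push(h, it)
    		} else if it.size > (*h)[0].size {
    			(*h)[0] = it
    			heap.Fix(h, 0)
    		}
    	}
    	return *h
    }

    func main() {
    	fmt.Println(topN([]entry{{5}, {1}, {9}, {3}, {7}}, 3)) // keeps 5, 7, 9
    }
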
@@ -130,7 +130,6 @@ func main() {
 var isOverview bool

 if target == "" {
-// Default to overview mode
 isOverview = true
 abs = "/"
 } else {
@@ -143,8 +142,7 @@ func main() {
 isOverview = false
 }

-// Prefetch overview cache in background (non-blocking)
-// Use context with timeout to prevent hanging
+// Warm overview cache in background.
 prefetchCtx, prefetchCancel := context.WithTimeout(context.Background(), 30*time.Second)
 defer prefetchCancel()
 go prefetchOverviewCache(prefetchCtx)
@@ -184,7 +182,6 @@ func newModel(path string, isOverview bool) model {
 largeMultiSelected: make(map[string]bool),
 }

-// In overview mode, create shortcut entries
 if isOverview {
 m.scanning = false
 m.hydrateOverviewEntries()
@@ -205,12 +202,10 @@ func createOverviewEntries() []dirEntry {
 home := os.Getenv("HOME")
 entries := []dirEntry{}

-// Separate Home and ~/Library for better visibility and performance
-// Home excludes Library to avoid duplicate scanning
+// Separate Home and ~/Library to avoid double counting.
 if home != "" {
 entries = append(entries, dirEntry{Name: "Home", Path: home, IsDir: true, Size: -1})

-// Add ~/Library separately so users can see app data usage
 userLibrary := filepath.Join(home, "Library")
 if _, err := os.Stat(userLibrary); err == nil {
 entries = append(entries, dirEntry{Name: "App Library", Path: userLibrary, IsDir: true, Size: -1})
@@ -222,7 +217,7 @@ func createOverviewEntries() []dirEntry {
 dirEntry{Name: "System Library", Path: "/Library", IsDir: true, Size: -1},
 )

-// Add Volumes shortcut only when it contains real mounted folders (e.g., external disks)
+// Include Volumes only when real mounts exist.
 if hasUsefulVolumeMounts("/Volumes") {
 entries = append(entries, dirEntry{Name: "Volumes", Path: "/Volumes", IsDir: true, Size: -1})
 }
@@ -238,7 +233,6 @@ func hasUsefulVolumeMounts(path string) bool {

 for _, entry := range entries {
 name := entry.Name()
-// Skip hidden control entries for Spotlight/TimeMachine etc.
 if strings.HasPrefix(name, ".") {
 continue
 }
@@ -276,8 +270,7 @@ func (m *model) hydrateOverviewEntries() {
 }

 func (m *model) sortOverviewEntriesBySize() {
-// Sort entries by size (largest first)
-// Use stable sort to maintain order when sizes are equal
+// Stable sort by size.
 sort.SliceStable(m.entries, func(i, j int) bool {
 return m.entries[i].Size > m.entries[j].Size
 })
@@ -288,7 +281,6 @@ func (m *model) scheduleOverviewScans() tea.Cmd {
 return nil
 }

-// Find pending entries (not scanned and not currently scanning)
 var pendingIndices []int
 for i, entry := range m.entries {
 if entry.Size < 0 && !m.overviewScanningSet[entry.Path] {
@@ -299,18 +291,15 @@ func (m *model) scheduleOverviewScans() tea.Cmd {
 }
 }

-// No more work to do
 if len(pendingIndices) == 0 {
 m.overviewScanning = false
 if !hasPendingOverviewEntries(m.entries) {
-// All scans complete - sort entries by size (largest first)
 m.sortOverviewEntriesBySize()
 m.status = "Ready"
 }
 return nil
 }

-// Mark all as scanning
 var cmds []tea.Cmd
 for _, idx := range pendingIndices {
 entry := m.entries[idx]
@@ -361,7 +350,6 @@ func (m model) Init() tea.Cmd {

 func (m model) scanCmd(path string) tea.Cmd {
 return func() tea.Msg {
-// Try to load from persistent cache first
 if cached, err := loadCacheFromDisk(path); err == nil {
 result := scanResult{
 Entries: cached.Entries,
@@ -371,8 +359,6 @@ func (m model) scanCmd(path string) tea.Cmd {
 return scanResultMsg{result: result, err: nil}
 }

-// Use singleflight to avoid duplicate scans of the same path
-// If multiple goroutines request the same path, only one scan will be performed
 v, err, _ := scanGroup.Do(path, func() (interface{}, error) {
 return scanPathConcurrent(path, m.filesScanned, m.dirsScanned, m.bytesScanned, m.currentPath)
 })
@@ -383,10 +369,8 @@ func (m model) scanCmd(path string) tea.Cmd {

 result := v.(scanResult)

-// Save to persistent cache asynchronously with error logging
 go func(p string, r scanResult) {
 if err := saveCacheToDisk(p, r); err != nil {
-// Log error but don't fail the scan
 _ = err // Cache save failure is not critical
 }
 }(path, result)
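
scanGroup.Do matches the golang.org/x/sync/singleflight API (the removed comment named the technique): concurrent requests for the same key share one execution. A standalone demonstration; it needs the x/sync module:

    package main

    import (
    	"fmt"
    	"sync"

    	"golang.org/x/sync/singleflight"
    )

    var group singleflight.Group

    // slowScan stands in for scanPathConcurrent above.
    func slowScan(path string) (interface{}, error) {
    	return "result for " + path, nil
    }

    func main() {
    	var wg sync.WaitGroup
    	for i := 0; i < 3; i++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			// All three callers receive the result of a single execution.
    			v, _, shared := group.Do("/same/path", func() (interface{}, error) {
    				return slowScan("/same/path")
    			})
    			fmt.Println(v, "shared:", shared)
    		}()
    	}
    	wg.Wait()
    }
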
@@ -412,7 +396,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 case deleteProgressMsg:
 if msg.done {
 m.deleting = false
-// Clear multi-selection after delete
 m.multiSelected = make(map[string]bool)
 m.largeMultiSelected = make(map[string]bool)
 if msg.err != nil {
@@ -424,7 +407,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 invalidateCache(m.path)
 m.status = fmt.Sprintf("Deleted %d items", msg.count)
-// Mark all caches as dirty
 for i := range m.history {
 m.history[i].Dirty = true
 }
@@ -433,9 +415,7 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 entry.Dirty = true
 m.cache[path] = entry
 }
-// Refresh the view
 m.scanning = true
-// Reset scan counters for rescan
 atomic.StoreInt64(m.filesScanned, 0)
 atomic.StoreInt64(m.dirsScanned, 0)
 atomic.StoreInt64(m.bytesScanned, 0)
@@ -452,7 +432,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 m.status = fmt.Sprintf("Scan failed: %v", msg.err)
 return m, nil
 }
-// Filter out 0-byte items for cleaner view
 filteredEntries := make([]dirEntry, 0, len(msg.result.Entries))
 for _, e := range msg.result.Entries {
 if e.Size > 0 {
@@ -477,7 +456,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 return m, nil
 case overviewSizeMsg:
-// Remove from scanning set
 delete(m.overviewScanningSet, msg.Path)

 if msg.Err == nil {
@@ -488,7 +466,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }

 if m.inOverviewMode() {
-// Update entry with result
 for i := range m.entries {
 if m.entries[i].Path == msg.Path {
 if msg.Err == nil {
@@ -501,18 +478,15 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 m.totalSize = sumKnownEntrySizes(m.entries)

-// Show error briefly if any
 if msg.Err != nil {
 m.status = fmt.Sprintf("Unable to measure %s: %v", displayPath(msg.Path), msg.Err)
 }

-// Schedule next batch of scans
 cmd := m.scheduleOverviewScans()
 return m, cmd
 }
 return m, nil
 case tickMsg:
-// Keep spinner running if scanning or deleting or if there are pending overview items
 hasPending := false
 if m.inOverviewMode() {
 for _, entry := range m.entries {
@@ -524,7 +498,6 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 if m.scanning || m.deleting || (m.inOverviewMode() && (m.overviewScanning || hasPending)) {
 m.spinner = (m.spinner + 1) % len(spinnerFrames)
-// Update delete progress status
 if m.deleting && m.deleteCount != nil {
 count := atomic.LoadInt64(m.deleteCount)
 if count > 0 {
@@ -540,18 +513,16 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }

 func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
-// Handle delete confirmation
+// Delete confirm flow.
 if m.deleteConfirm {
 switch msg.String() {
 case "delete", "backspace":
-// Confirm delete - start async deletion
 m.deleteConfirm = false
 m.deleting = true
 var deleteCount int64
 m.deleteCount = &deleteCount

-// Collect paths to delete (multi-select or single)
-// Using paths instead of indices is safer - avoids deleting wrong files if list changes
+// Collect paths (safer than indices).
 var pathsToDelete []string
 if m.showLargeFiles {
 if len(m.largeMultiSelected) > 0 {
@@ -587,13 +558,11 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 m.status = fmt.Sprintf("Deleting %d items...", len(pathsToDelete))
 return m, tea.Batch(deleteMultiplePathsCmd(pathsToDelete, m.deleteCount), tickCmd())
 case "esc", "q":
-// Cancel delete with ESC or Q
 m.status = "Cancelled"
 m.deleteConfirm = false
 m.deleteTarget = nil
 return m, nil
 default:
-// Ignore other keys - keep showing confirmation
 return m, nil
 }
 }
@@ -648,7 +617,6 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 return m, nil
 }
 if len(m.history) == 0 {
-// Return to overview if at top level
 if !m.inOverviewMode() {
 return m, m.switchToOverviewMode()
 }
@@ -663,7 +631,7 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 m.largeOffset = last.LargeOffset
 m.isOverview = last.IsOverview
 if last.Dirty {
-// If returning to overview mode, refresh overview entries instead of scanning
+// On overview return, refresh cached entries.
 if last.IsOverview {
 m.hydrateOverviewEntries()
 m.totalSize = sumKnownEntrySizes(m.entries)
@@ -696,17 +664,14 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 m.scanning = false
 return m, nil
 case "r":
-// Clear multi-selection on refresh
 m.multiSelected = make(map[string]bool)
 m.largeMultiSelected = make(map[string]bool)

 if m.inOverviewMode() {
-// In overview mode, clear cache and re-scan known entries
 m.overviewSizeCache = make(map[string]int64)
 m.overviewScanningSet = make(map[string]bool)
 m.hydrateOverviewEntries() // Reset sizes to pending

-// Reset all entries to pending state for visual feedback
 for i := range m.entries {
 m.entries[i].Size = -1
 }
@@ -717,11 +682,9 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 return m, tea.Batch(m.scheduleOverviewScans(), tickCmd())
 }

-// Normal mode: Invalidate cache before rescanning
 invalidateCache(m.path)
 m.status = "Refreshing..."
 m.scanning = true
-// Reset scan counters for refresh
 atomic.StoreInt64(m.filesScanned, 0)
 atomic.StoreInt64(m.dirsScanned, 0)
 atomic.StoreInt64(m.bytesScanned, 0)
@@ -730,7 +693,6 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 }
 return m, tea.Batch(m.scanCmd(m.path), tickCmd())
 case "t", "T":
-// Don't allow switching to large files view in overview mode
 if !m.inOverviewMode() {
 m.showLargeFiles = !m.showLargeFiles
 if m.showLargeFiles {
@@ -740,16 +702,13 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 } else {
 m.multiSelected = make(map[string]bool)
 }
-// Reset status when switching views
 m.status = fmt.Sprintf("Scanned %s", humanizeBytes(m.totalSize))
 }
 case "o":
-// Open selected entries (multi-select aware)
-// Limit batch operations to prevent system resource exhaustion
+// Open selected entries (multi-select aware).
 const maxBatchOpen = 20
 if m.showLargeFiles {
 if len(m.largeFiles) > 0 {
-// Check for multi-selection first
 if len(m.largeMultiSelected) > 0 {
 count := len(m.largeMultiSelected)
 if count > maxBatchOpen {
@@ -775,7 +734,6 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 }
 }
 } else if len(m.entries) > 0 {
-// Check for multi-selection first
 if len(m.multiSelected) > 0 {
 count := len(m.multiSelected)
 if count > maxBatchOpen {
@@ -801,12 +759,10 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 }
 }
 case "f", "F":
-// Reveal selected entries in Finder (multi-select aware)
-// Limit batch operations to prevent system resource exhaustion
+// Reveal in Finder (multi-select aware).
 const maxBatchReveal = 20
 if m.showLargeFiles {
 if len(m.largeFiles) > 0 {
-// Check for multi-selection first
 if len(m.largeMultiSelected) > 0 {
 count := len(m.largeMultiSelected)
 if count > maxBatchReveal {
@@ -832,7 +788,6 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 }
 }
 } else if len(m.entries) > 0 {
-// Check for multi-selection first
 if len(m.multiSelected) > 0 {
 count := len(m.multiSelected)
 if count > maxBatchReveal {
@@ -858,8 +813,7 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 }
 }
 case " ":
-// Toggle multi-select with spacebar
-// Using paths as keys (instead of indices) is safer and more maintainable
+// Toggle multi-select (paths as keys).
 if m.showLargeFiles {
 if len(m.largeFiles) > 0 && m.largeSelected < len(m.largeFiles) {
 if m.largeMultiSelected == nil {
@@ -871,11 +825,9 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 } else {
 m.largeMultiSelected[selectedPath] = true
 }
-// Update status to show selection count and total size
 count := len(m.largeMultiSelected)
 if count > 0 {
 var totalSize int64
-// Calculate total size by looking up each selected path
 for path := range m.largeMultiSelected {
 for _, file := range m.largeFiles {
 if file.Path == path {
@@ -899,11 +851,9 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
 } else {
 m.multiSelected[selectedPath] = true
 }
-// Update status to show selection count and total size
 count := len(m.multiSelected)
 if count > 0 {
|
||||||
var totalSize int64
|
var totalSize int64
|
||||||
// Calculate total size by looking up each selected path
|
|
||||||
for path := range m.multiSelected {
|
for path := range m.multiSelected {
|
||||||
for _, entry := range m.entries {
|
for _, entry := range m.entries {
|
||||||
if entry.Path == path {
|
if entry.Path == path {
|
||||||
@@ -918,15 +868,11 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
case "delete", "backspace":
|
case "delete", "backspace":
|
||||||
// Delete selected file(s) or directory(ies)
|
|
||||||
if m.showLargeFiles {
|
if m.showLargeFiles {
|
||||||
if len(m.largeFiles) > 0 {
|
if len(m.largeFiles) > 0 {
|
||||||
// Check for multi-selection first
|
|
||||||
if len(m.largeMultiSelected) > 0 {
|
if len(m.largeMultiSelected) > 0 {
|
||||||
m.deleteConfirm = true
|
m.deleteConfirm = true
|
||||||
// Set deleteTarget to first selected for display purposes
|
|
||||||
for path := range m.largeMultiSelected {
|
for path := range m.largeMultiSelected {
|
||||||
// Find the file entry by path
|
|
||||||
for _, file := range m.largeFiles {
|
for _, file := range m.largeFiles {
|
||||||
if file.Path == path {
|
if file.Path == path {
|
||||||
m.deleteTarget = &dirEntry{
|
m.deleteTarget = &dirEntry{
|
||||||
@@ -952,12 +898,10 @@ func (m model) updateKey(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
} else if len(m.entries) > 0 && !m.inOverviewMode() {
|
} else if len(m.entries) > 0 && !m.inOverviewMode() {
|
||||||
// Check for multi-selection first
|
|
||||||
if len(m.multiSelected) > 0 {
|
if len(m.multiSelected) > 0 {
|
||||||
m.deleteConfirm = true
|
m.deleteConfirm = true
|
||||||
// Set deleteTarget to first selected for display purposes
|
|
||||||
for path := range m.multiSelected {
|
for path := range m.multiSelected {
|
||||||
// Find the entry by path
|
// Resolve entry by path.
|
||||||
for i := range m.entries {
|
for i := range m.entries {
|
||||||
if m.entries[i].Path == path {
|
if m.entries[i].Path == path {
|
||||||
m.deleteTarget = &m.entries[i]
|
m.deleteTarget = &m.entries[i]
|
||||||
@@ -994,7 +938,6 @@ func (m *model) switchToOverviewMode() tea.Cmd {
|
|||||||
m.status = "Ready"
|
m.status = "Ready"
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
// Start tick to animate spinner while scanning
|
|
||||||
return tea.Batch(cmd, tickCmd())
|
return tea.Batch(cmd, tickCmd())
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1004,7 +947,6 @@ func (m model) enterSelectedDir() (tea.Model, tea.Cmd) {
|
|||||||
}
|
}
|
||||||
selected := m.entries[m.selected]
|
selected := m.entries[m.selected]
|
||||||
if selected.IsDir {
|
if selected.IsDir {
|
||||||
// Always save current state to history (including overview mode)
|
|
||||||
m.history = append(m.history, snapshotFromModel(m))
|
m.history = append(m.history, snapshotFromModel(m))
|
||||||
m.path = selected.Path
|
m.path = selected.Path
|
||||||
m.selected = 0
|
m.selected = 0
|
||||||
@@ -1012,11 +954,9 @@ func (m model) enterSelectedDir() (tea.Model, tea.Cmd) {
|
|||||||
m.status = "Scanning..."
|
m.status = "Scanning..."
|
||||||
m.scanning = true
|
m.scanning = true
|
||||||
m.isOverview = false
|
m.isOverview = false
|
||||||
// Clear multi-selection when entering new directory
|
|
||||||
m.multiSelected = make(map[string]bool)
|
m.multiSelected = make(map[string]bool)
|
||||||
m.largeMultiSelected = make(map[string]bool)
|
m.largeMultiSelected = make(map[string]bool)
|
||||||
|
|
||||||
// Reset scan counters for new scan
|
|
||||||
atomic.StoreInt64(m.filesScanned, 0)
|
atomic.StoreInt64(m.filesScanned, 0)
|
||||||
atomic.StoreInt64(m.dirsScanned, 0)
|
atomic.StoreInt64(m.dirsScanned, 0)
|
||||||
atomic.StoreInt64(m.bytesScanned, 0)
|
atomic.StoreInt64(m.bytesScanned, 0)
|
||||||
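The handlers above key multi-selection by entry path rather than by list index. A minimal sketch of why that matters (the `entry` type here is illustrative, not the repo's `dirEntry`): selections keyed by path stay valid when the slice is re-sorted or refreshed, while index keys would silently point at different rows.

```go
package main

import "fmt"

type entry struct {
	Path string
	Size int64
}

func main() {
	entries := []entry{{"/a", 10}, {"/b", 20}}
	selected := map[string]bool{"/b": true} // keyed by path, not index

	// A refresh re-orders the slice; the selection still follows "/b".
	entries[0], entries[1] = entries[1], entries[0]
	for _, e := range entries {
		fmt.Println(e.Path, "selected:", selected[e.Path])
	}
}
```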
@@ -31,16 +31,14 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in

 	var total int64

-	// Use heaps to track Top N items, drastically reducing memory usage
-	// for directories with millions of files
+	// Keep Top N heaps.
 	entriesHeap := &entryHeap{}
 	heap.Init(entriesHeap)

 	largeFilesHeap := &largeFileHeap{}
 	heap.Init(largeFilesHeap)

-	// Use worker pool for concurrent directory scanning
-	// For I/O-bound operations, use more workers than CPU count
+	// Worker pool sized for I/O-bound scanning.
 	numWorkers := runtime.NumCPU() * cpuMultiplier
 	if numWorkers < minWorkers {
 		numWorkers = minWorkers
@@ -57,17 +55,15 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 	sem := make(chan struct{}, numWorkers)
 	var wg sync.WaitGroup

-	// Use channels to collect results without lock contention
+	// Collect results via channels.
 	entryChan := make(chan dirEntry, len(children))
 	largeFileChan := make(chan fileEntry, maxLargeFiles*2)

-	// Start goroutines to collect from channels into heaps
 	var collectorWg sync.WaitGroup
 	collectorWg.Add(2)
 	go func() {
 		defer collectorWg.Done()
 		for entry := range entryChan {
-			// Maintain Top N Heap for entries
 			if entriesHeap.Len() < maxEntries {
 				heap.Push(entriesHeap, entry)
 			} else if entry.Size > (*entriesHeap)[0].Size {
@@ -79,7 +75,6 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 	go func() {
 		defer collectorWg.Done()
 		for file := range largeFileChan {
-			// Maintain Top N Heap for large files
 			if largeFilesHeap.Len() < maxLargeFiles {
 				heap.Push(largeFilesHeap, file)
 			} else if file.Size > (*largeFilesHeap)[0].Size {
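The collector goroutines keep only the N largest items by maintaining a bounded min-heap: the root is the smallest of the current top N, so a newcomer only displaces it when strictly larger. A self-contained sketch of the same pattern (types here are illustrative, not the repo's heap types):

```go
package main

import (
	"container/heap"
	"fmt"
)

type item struct{ size int64 }

type minHeap []item

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i].size < h[j].size } // root = smallest
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x any)        { *h = append(*h, x.(item)) }
func (h *minHeap) Pop() any {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

func main() {
	const maxN = 3
	h := &minHeap{}
	heap.Init(h)
	for _, s := range []int64{5, 1, 9, 7, 3} {
		if h.Len() < maxN {
			heap.Push(h, item{s})
		} else if s > (*h)[0].size { // beats the smallest of the current top N
			heap.Pop(h)
			heap.Push(h, item{s})
		}
	}
	// Pop yields ascending order; fill the slice back-to-front for descending.
	out := make([]item, h.Len())
	for i := len(out) - 1; i >= 0; i-- {
		out[i] = heap.Pop(h).(item)
	}
	fmt.Println(out) // [{9} {7} {5}]
}
```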
@@ -96,20 +91,15 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 	for _, child := range children {
 		fullPath := filepath.Join(root, child.Name())

-		// Skip symlinks to avoid following them into unexpected locations
-		// Use Type() instead of IsDir() to check without following symlinks
+		// Skip symlinks to avoid following unexpected targets.
 		if child.Type()&fs.ModeSymlink != 0 {
-			// For symlinks, check if they point to a directory
 			targetInfo, err := os.Stat(fullPath)
 			isDir := false
 			if err == nil && targetInfo.IsDir() {
 				isDir = true
 			}

-			// Get symlink size (we don't effectively count the target size towards parent to avoid double counting,
-			// or we just count the link size itself. Existing logic counts 'size' via getActualFileSize on the link info).
-			// Ideally we just want navigation.
-			// Re-fetching info for link itself if needed, but child.Info() does that.
+			// Count link size only to avoid double-counting targets.
 			info, err := child.Info()
 			if err != nil {
 				continue
@@ -118,28 +108,26 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 			atomic.AddInt64(&total, size)

 			entryChan <- dirEntry{
-				Name:       child.Name() + " →", // Add arrow to indicate symlink
+				Name:       child.Name() + " →",
 				Path:       fullPath,
 				Size:       size,
-				IsDir:      isDir, // Allow navigation if target is directory
+				IsDir:      isDir,
 				LastAccess: getLastAccessTimeFromInfo(info),
 			}
 			continue
 		}

 		if child.IsDir() {
-			// Check if directory should be skipped based on user configuration
 			if defaultSkipDirs[child.Name()] {
 				continue
 			}

-			// In root directory, skip system directories completely
+			// Skip system dirs at root.
 			if isRootDir && skipSystemDirs[child.Name()] {
 				continue
 			}

-			// Special handling for ~/Library - reuse cache to avoid duplicate scanning
-			// This is scanned separately in overview mode
+			// ~/Library is scanned separately; reuse cache when possible.
 			if isHomeDir && child.Name() == "Library" {
 				wg.Add(1)
 				go func(name, path string) {
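`child.Type()&fs.ModeSymlink` inspects the entry's mode without touching the link target, while the follow-up `os.Stat` deliberately follows it to decide whether navigation makes sense. A small sketch of that split:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
)

func main() {
	children, err := os.ReadDir(".")
	if err != nil {
		return
	}
	for _, child := range children {
		if child.Type()&fs.ModeSymlink == 0 {
			continue // not a symlink; Type() never follows links
		}
		// os.Stat follows the link; os.Lstat would describe the link itself.
		target, err := os.Stat(child.Name())
		fmt.Println(child.Name(), "symlink; target is dir:", err == nil && target.IsDir())
	}
}
```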
@@ -148,14 +136,11 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 					defer func() { <-sem }()

 					var size int64
-					// Try overview cache first (from overview scan)
 					if cached, err := loadStoredOverviewSize(path); err == nil && cached > 0 {
 						size = cached
 					} else if cached, err := loadCacheFromDisk(path); err == nil {
-						// Try disk cache
 						size = cached.TotalSize
 					} else {
-						// No cache available, scan normally
 						size = calculateDirSizeConcurrent(path, largeFileChan, filesScanned, dirsScanned, bytesScanned, currentPath)
 					}
 					atomic.AddInt64(&total, size)
@@ -172,7 +157,7 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 				continue
 			}

-			// For folded directories, calculate size quickly without expanding
+			// Folded dirs: fast size without expanding.
 			if shouldFoldDirWithPath(child.Name(), fullPath) {
 				wg.Add(1)
 				go func(name, path string) {
@@ -180,10 +165,8 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 					sem <- struct{}{}
 					defer func() { <-sem }()

-					// Try du command first for folded dirs (much faster)
 					size, err := getDirectorySizeFromDu(path)
 					if err != nil || size <= 0 {
-						// Fallback to concurrent walk if du fails
 						size = calculateDirSizeFast(path, filesScanned, dirsScanned, bytesScanned, currentPath)
 					}
 					atomic.AddInt64(&total, size)
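`getDirectorySizeFromDu` is the repo's helper; one plausible shape for it, shelling out to `du -sk` (1 KiB blocks) under a timeout, is sketched below. This is an assumption about the approach, not the actual implementation:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// dirSizeFromDu is a hypothetical stand-in for the repo's helper.
func dirSizeFromDu(path string) (int64, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	out, err := exec.CommandContext(ctx, "du", "-sk", path).Output()
	if err != nil {
		return 0, err
	}
	// Output looks like: "123456\t/some/path\n"
	fields := strings.Fields(string(out))
	if len(fields) == 0 {
		return 0, fmt.Errorf("unexpected du output")
	}
	kb, err := strconv.ParseInt(fields[0], 10, 64)
	if err != nil {
		return 0, err
	}
	return kb * 1024, nil
}

func main() {
	size, err := dirSizeFromDu(".")
	fmt.Println(size, err)
}
```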
@@ -194,13 +177,12 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 						Path:       path,
 						Size:       size,
 						IsDir:      true,
-						LastAccess: time.Time{}, // Lazy load when displayed
+						LastAccess: time.Time{},
 					}
 				}(child.Name(), fullPath)
 				continue
 			}

-			// Normal directory: full scan with detail
 			wg.Add(1)
 			go func(name, path string) {
 				defer wg.Done()
@@ -216,7 +198,7 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 					Path:       path,
 					Size:       size,
 					IsDir:      true,
-					LastAccess: time.Time{}, // Lazy load when displayed
+					LastAccess: time.Time{},
 				}
 			}(child.Name(), fullPath)
 			continue
@@ -226,7 +208,7 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 		if err != nil {
 			continue
 		}
-		// Get actual disk usage for sparse files and cloud files
+		// Actual disk usage for sparse/cloud files.
 		size := getActualFileSize(fullPath, info)
 		atomic.AddInt64(&total, size)
 		atomic.AddInt64(filesScanned, 1)
@@ -239,7 +221,7 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 			IsDir:      false,
 			LastAccess: getLastAccessTimeFromInfo(info),
 		}
-		// Only track large files that are not code/text files
+		// Track large files only.
 		if !shouldSkipFileForLargeTracking(fullPath) && size >= minLargeFileSize {
 			largeFileChan <- fileEntry{Name: child.Name(), Path: fullPath, Size: size}
 		}
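`getActualFileSize` reports on-disk rather than logical size. One common way to get that on Unix, and a plausible basis for the helper (an assumption; the repo's code may differ), is the `Stat_t.Blocks` count of 512-byte units, which under-reports sparse and cloud-placeholder files exactly as intended:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// actualFileSize is a hypothetical sketch, not the repo's implementation.
func actualFileSize(info os.FileInfo) int64 {
	if st, ok := info.Sys().(*syscall.Stat_t); ok {
		return st.Blocks * 512 // bytes actually allocated on disk
	}
	return info.Size() // fallback: logical size
}

func main() {
	info, err := os.Stat("/etc/hosts")
	if err == nil {
		fmt.Println(actualFileSize(info), "bytes on disk,", info.Size(), "logical")
	}
}
```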
@@ -247,12 +229,12 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in

 	wg.Wait()

-	// Close channels and wait for collectors to finish
+	// Close channels and wait for collectors.
 	close(entryChan)
 	close(largeFileChan)
 	collectorWg.Wait()

-	// Convert Heaps to sorted slices (Descending order)
+	// Convert heaps to sorted slices (descending).
 	entries := make([]dirEntry, entriesHeap.Len())
 	for i := len(entries) - 1; i >= 0; i-- {
 		entries[i] = heap.Pop(entriesHeap).(dirEntry)
@@ -263,20 +245,11 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 		largeFiles[i] = heap.Pop(largeFilesHeap).(fileEntry)
 	}

-	// Try to use Spotlight (mdfind) for faster large file discovery
-	// This is a performance optimization that gracefully falls back to scan results
-	// if Spotlight is unavailable or fails. The fallback is intentionally silent
-	// because users only care about correct results, not the method used.
+	// Use Spotlight for large files when available.
 	if spotlightFiles := findLargeFilesWithSpotlight(root, minLargeFileSize); len(spotlightFiles) > 0 {
-		// Spotlight results are already sorted top N
-		// Use them in place of scanned large files
 		largeFiles = spotlightFiles
 	}

-	// Double check sorting consistency (Spotlight returns sorted, but heap pop handles scan results)
-	// If needed, we could re-sort largeFiles, but heap pop ensures ascending, and we filled reverse, so it's Descending.
-	// Spotlight returns Descending. So no extra sort needed for either.
-
 	return scanResult{
 		Entries:    entries,
 		LargeFiles: largeFiles,
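`mdfind` is the Spotlight CLI and `kMDItemFSSize` is its standard file-size metadata attribute. A minimal standalone query mirroring the scanner's usage (the directory and 100 MiB threshold are arbitrary examples):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	query := fmt.Sprintf("kMDItemFSSize >= %d", 100*1024*1024)
	out, err := exec.CommandContext(ctx, "mdfind", "-onlyin", "/tmp", query).Output()
	if err != nil {
		// The real scanner falls back silently to its own scan results.
		fmt.Println("Spotlight unavailable:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			fmt.Println(line)
		}
	}
}
```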
@@ -285,21 +258,16 @@ func scanPathConcurrent(root string, filesScanned, dirsScanned, bytesScanned *in
 }

 func shouldFoldDirWithPath(name, path string) bool {
-	// Check basic fold list first
 	if foldDirs[name] {
 		return true
 	}

-	// Special case: npm cache directories - fold all subdirectories
-	// This includes: .npm/_quick/*, .npm/_cacache/*, .npm/a-z/*, .tnpm/*
+	// Handle npm cache structure.
 	if strings.Contains(path, "/.npm/") || strings.Contains(path, "/.tnpm/") {
-		// Get the parent directory name
 		parent := filepath.Base(filepath.Dir(path))
-		// If parent is a cache folder (_quick, _cacache, etc) or npm dir itself, fold it
 		if parent == ".npm" || parent == ".tnpm" || strings.HasPrefix(parent, "_") {
 			return true
 		}
-		// Also fold single-letter subdirectories (npm cache structure like .npm/a/, .npm/b/)
 		if len(name) == 1 {
 			return true
 		}
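The fold rules are easy to pin down with a table-driven check. A hypothetical harness (not in the repo; `foldDirs` here holds one example entry):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

var foldDirs = map[string]bool{"node_modules": true} // example entry

func shouldFoldDirWithPath(name, path string) bool {
	if foldDirs[name] {
		return true
	}
	if strings.Contains(path, "/.npm/") || strings.Contains(path, "/.tnpm/") {
		parent := filepath.Base(filepath.Dir(path))
		if parent == ".npm" || parent == ".tnpm" || strings.HasPrefix(parent, "_") {
			return true
		}
		if len(name) == 1 {
			return true
		}
	}
	return false
}

func main() {
	cases := []struct {
		name, path string
		want       bool
	}{
		{"_cacache", "/Users/x/.npm/_cacache", true}, // cache folder under .npm
		{"a", "/Users/x/.npm/a", true},               // single-letter npm shard
		{"src", "/Users/x/project/src", false},       // ordinary directory
	}
	for _, c := range cases {
		fmt.Println(c.path, "ok:", shouldFoldDirWithPath(c.name, c.path) == c.want)
	}
}
```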
@@ -313,17 +281,14 @@ func shouldSkipFileForLargeTracking(path string) bool {
 	return skipExtensions[ext]
 }

-// calculateDirSizeFast performs concurrent directory size calculation using os.ReadDir
-// This is a faster fallback than filepath.WalkDir when du fails
+// calculateDirSizeFast performs concurrent dir sizing using os.ReadDir.
 func calculateDirSizeFast(root string, filesScanned, dirsScanned, bytesScanned *int64, currentPath *string) int64 {
 	var total int64
 	var wg sync.WaitGroup

-	// Create context with timeout
 	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
 	defer cancel()

-	// Limit total concurrency for this walk
 	concurrency := runtime.NumCPU() * 4
 	if concurrency > 64 {
 		concurrency = 64
@@ -351,19 +316,16 @@ func calculateDirSizeFast(root string, filesScanned, dirsScanned, bytesScanned *

 	for _, entry := range entries {
 		if entry.IsDir() {
-			// Directories: recurse concurrently
 			wg.Add(1)
-			// Capture loop variable
 			subDir := filepath.Join(dirPath, entry.Name())
 			go func(p string) {
 				defer wg.Done()
-				sem <- struct{}{} // Acquire token
-				defer func() { <-sem }() // Release token
+				sem <- struct{}{}
+				defer func() { <-sem }()
 				walk(p)
 			}(subDir)
 			atomic.AddInt64(dirsScanned, 1)
 		} else {
-			// Files: process immediately
 			info, err := entry.Info()
 			if err == nil {
 				size := getActualFileSize(filepath.Join(dirPath, entry.Name()), info)
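`sem <- struct{}{}` / `<-sem` is the buffered-channel semaphore used throughout the scanner: a send takes a slot and blocks when all slots are busy, a receive frees one. Stripped to its essentials:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const maxConcurrent = 4
	sem := make(chan struct{}, maxConcurrent)
	var wg sync.WaitGroup

	for i := 0; i < 16; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks when the buffer is full)
			defer func() { <-sem }() // release the slot
			fmt.Println("working on", n)
		}(i)
	}
	wg.Wait()
}
```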
@@ -388,9 +350,8 @@ func calculateDirSizeFast(root string, filesScanned, dirsScanned, bytesScanned *
 	return total
 }

-// Use Spotlight (mdfind) to quickly find large files in a directory
+// Use Spotlight (mdfind) to quickly find large files.
 func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
-	// mdfind query: files >= minSize in the specified directory
 	query := fmt.Sprintf("kMDItemFSSize >= %d", minSize)

 	ctx, cancel := context.WithTimeout(context.Background(), mdlsTimeout)
@@ -399,7 +360,6 @@ func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
 	cmd := exec.CommandContext(ctx, "mdfind", "-onlyin", root, query)
 	output, err := cmd.Output()
 	if err != nil {
-		// Fallback: mdfind not available or failed
 		return nil
 	}

@@ -411,28 +371,26 @@ func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
 			continue
 		}

-		// Filter out code files first (cheapest check, no I/O)
+		// Filter code files first (cheap).
 		if shouldSkipFileForLargeTracking(line) {
 			continue
 		}

-		// Filter out files in folded directories (cheap string check)
+		// Filter folded directories (cheap string check).
 		if isInFoldedDir(line) {
 			continue
 		}

-		// Use Lstat instead of Stat (faster, doesn't follow symlinks)
 		info, err := os.Lstat(line)
 		if err != nil {
 			continue
 		}

-		// Skip if it's a directory or symlink
 		if info.IsDir() || info.Mode()&os.ModeSymlink != 0 {
 			continue
 		}

-		// Get actual disk usage for sparse files and cloud files
+		// Actual disk usage for sparse/cloud files.
 		actualSize := getActualFileSize(line, info)
 		files = append(files, fileEntry{
 			Name: filepath.Base(line),
@@ -441,12 +399,11 @@ func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
 		})
 	}

-	// Sort by size (descending)
+	// Sort by size (descending).
 	sort.Slice(files, func(i, j int) bool {
 		return files[i].Size > files[j].Size
 	})

-	// Return top N
 	if len(files) > maxLargeFiles {
 		files = files[:maxLargeFiles]
 	}
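The deleted comment explained the `Lstat` choice: it describes the link itself in one syscall and never follows it. A quick demonstration of the difference (writes a throwaway link under /tmp):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	_ = os.Symlink("/etc/hosts", "/tmp/hosts-link") // ignore error if it already exists

	l, err1 := os.Lstat("/tmp/hosts-link")
	s, err2 := os.Stat("/tmp/hosts-link")
	if err1 != nil || err2 != nil {
		return
	}
	fmt.Println("Lstat sees symlink:", l.Mode()&os.ModeSymlink != 0) // true
	fmt.Println("Stat sees symlink:", s.Mode()&os.ModeSymlink != 0)  // false: follows the target
}
```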
@@ -454,9 +411,8 @@ func findLargeFilesWithSpotlight(root string, minSize int64) []fileEntry {
 	return files
 }

-// isInFoldedDir checks if a path is inside a folded directory (optimized)
+// isInFoldedDir checks if a path is inside a folded directory.
 func isInFoldedDir(path string) bool {
-	// Split path into components for faster checking
 	parts := strings.Split(path, string(os.PathSeparator))
 	for _, part := range parts {
 		if foldDirs[part] {
@@ -467,7 +423,6 @@ func isInFoldedDir(path string) bool {
 }

 func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, filesScanned, dirsScanned, bytesScanned *int64, currentPath *string) int64 {
-	// Read immediate children
 	children, err := os.ReadDir(root)
 	if err != nil {
 		return 0
@@ -476,7 +431,7 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
 	var total int64
 	var wg sync.WaitGroup

-	// Limit concurrent subdirectory scans to avoid too many goroutines
+	// Limit concurrent subdirectory scans.
 	maxConcurrent := runtime.NumCPU() * 2
 	if maxConcurrent > maxDirWorkers {
 		maxConcurrent = maxDirWorkers
@@ -486,9 +441,7 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
 	for _, child := range children {
 		fullPath := filepath.Join(root, child.Name())

-		// Skip symlinks to avoid following them into unexpected locations
 		if child.Type()&fs.ModeSymlink != 0 {
-			// For symlinks, just count their size without following
 			info, err := child.Info()
 			if err != nil {
 				continue
@@ -501,9 +454,7 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
 		}

 		if child.IsDir() {
-			// Check if this is a folded directory
 			if shouldFoldDirWithPath(child.Name(), fullPath) {
-				// Use du for folded directories (much faster)
 				wg.Add(1)
 				go func(path string) {
 					defer wg.Done()
@@ -517,7 +468,6 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
 				continue
 			}

-			// Recursively scan subdirectory in parallel
 			wg.Add(1)
 			go func(path string) {
 				defer wg.Done()
@@ -531,7 +481,6 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
 			continue
 		}

-		// Handle files
 		info, err := child.Info()
 		if err != nil {
 			continue
@@ -542,12 +491,11 @@ func calculateDirSizeConcurrent(root string, largeFileChan chan<- fileEntry, fil
 		atomic.AddInt64(filesScanned, 1)
 		atomic.AddInt64(bytesScanned, size)

-		// Track large files
 		if !shouldSkipFileForLargeTracking(fullPath) && size >= minLargeFileSize {
 			largeFileChan <- fileEntry{Name: child.Name(), Path: fullPath, Size: size}
 		}

-		// Update current path occasionally to prevent UI jitter
+		// Update current path occasionally to prevent UI jitter.
 		if currentPath != nil && atomic.LoadInt64(filesScanned)%int64(batchUpdateSize) == 0 {
 			*currentPath = fullPath
 		}
@@ -8,7 +8,7 @@ import (
 	"sync/atomic"
 )

-// View renders the TUI display.
+// View renders the TUI.
 func (m model) View() string {
 	var b strings.Builder
 	fmt.Fprintln(&b)
@@ -16,7 +16,6 @@ func (m model) View() string {
 	if m.inOverviewMode() {
 		fmt.Fprintf(&b, "%sAnalyze Disk%s\n", colorPurpleBold, colorReset)
 		if m.overviewScanning {
-			// Check if we're in initial scan (all entries are pending)
 			allPending := true
 			for _, entry := range m.entries {
 				if entry.Size >= 0 {
@@ -26,19 +25,16 @@ func (m model) View() string {
 			}

 			if allPending {
-				// Show prominent loading screen for initial scan
 				fmt.Fprintf(&b, "%s%s%s%s Analyzing disk usage, please wait...%s\n",
 					colorCyan, colorBold,
 					spinnerFrames[m.spinner],
 					colorReset, colorReset)
 				return b.String()
 			} else {
-				// Progressive scanning - show subtle indicator
 				fmt.Fprintf(&b, "%sSelect a location to explore:%s ", colorGray, colorReset)
 				fmt.Fprintf(&b, "%s%s%s%s Scanning...\n\n", colorCyan, colorBold, spinnerFrames[m.spinner], colorReset)
 			}
 		} else {
-			// Check if there are still pending items
 			hasPending := false
 			for _, entry := range m.entries {
 				if entry.Size < 0 {
@@ -62,7 +58,6 @@ func (m model) View() string {
 	}

 	if m.deleting {
-		// Show delete progress
 		count := int64(0)
 		if m.deleteCount != nil {
 			count = atomic.LoadInt64(m.deleteCount)
@@ -130,7 +125,6 @@ func (m model) View() string {
 			sizeColor := colorGray
 			numColor := ""

-			// Check if this item is multi-selected (by path, not index)
 			isMultiSelected := m.largeMultiSelected != nil && m.largeMultiSelected[file.Path]
 			selectIcon := "○"
 			if isMultiSelected {
@@ -164,8 +158,7 @@ func (m model) View() string {
 			}
 		}
 		totalSize := m.totalSize
-		// For overview mode, use a fixed small width since path names are short
-		// (~/Downloads, ~/Library, etc. - max ~15 chars)
+		// Overview paths are short; fixed width keeps layout stable.
 		nameWidth := 20
 		for idx, entry := range m.entries {
 			icon := "📁"
@@ -217,12 +210,10 @@ func (m model) View() string {
 			}
 			displayIndex := idx + 1

-			// Priority: cleanable > unused time
 			var hintLabel string
 			if entry.IsDir && isCleanableDir(entry.Path) {
 				hintLabel = fmt.Sprintf("%s🧹%s", colorYellow, colorReset)
 			} else {
-				// For overview mode, get access time on-demand if not set
 				lastAccess := entry.LastAccess
 				if lastAccess.IsZero() && entry.Path != "" {
 					lastAccess = getLastAccessTime(entry.Path)
@@ -243,7 +234,6 @@ func (m model) View() string {
 			}
 		}
 	} else {
-		// Normal mode with sizes and progress bars
 		maxSize := int64(1)
 		for _, entry := range m.entries {
 			if entry.Size > maxSize {
@@ -272,14 +262,11 @@ func (m model) View() string {
 			name := trimNameWithWidth(entry.Name, nameWidth)
 			paddedName := padName(name, nameWidth)

-			// Calculate percentage
 			percent := float64(entry.Size) / float64(m.totalSize) * 100
 			percentStr := fmt.Sprintf("%5.1f%%", percent)

-			// Get colored progress bar
 			bar := coloredProgressBar(entry.Size, maxSize, percent)

-			// Color the size based on magnitude
 			var sizeColor string
 			if percent >= 50 {
 				sizeColor = colorRed
@@ -291,7 +278,6 @@ func (m model) View() string {
 				sizeColor = colorGray
 			}

-			// Check if this item is multi-selected (by path, not index)
 			isMultiSelected := m.multiSelected != nil && m.multiSelected[entry.Path]
 			selectIcon := "○"
 			nameColor := ""
@@ -300,7 +286,6 @@ func (m model) View() string {
 				nameColor = colorGreen
 			}

-			// Keep chart columns aligned even when arrow is shown
 			entryPrefix := " "
 			nameSegment := fmt.Sprintf("%s %s", icon, paddedName)
 			if nameColor != "" {
@@ -320,12 +305,10 @@ func (m model) View() string {

 			displayIndex := idx + 1

-			// Priority: cleanable > unused time
 			var hintLabel string
 			if entry.IsDir && isCleanableDir(entry.Path) {
 				hintLabel = fmt.Sprintf("%s🧹%s", colorYellow, colorReset)
 			} else {
-				// Get access time on-demand if not set
 				lastAccess := entry.LastAccess
 				if lastAccess.IsZero() && entry.Path != "" {
 					lastAccess = getLastAccessTime(entry.Path)
@@ -351,7 +334,6 @@ func (m model) View() string {

 	fmt.Fprintln(&b)
 	if m.inOverviewMode() {
-		// Show ← Back if there's history (entered from a parent directory)
 		if len(m.history) > 0 {
 			fmt.Fprintf(&b, "%s↑↓←→ | Enter | R Refresh | O Open | F File | ← Back | Q Quit%s\n", colorGray, colorReset)
 		} else {
@@ -383,12 +365,10 @@ func (m model) View() string {
 	}
 	if m.deleteConfirm && m.deleteTarget != nil {
 		fmt.Fprintln(&b)
-		// Show multi-selection delete info if applicable
 		var deleteCount int
 		var totalDeleteSize int64
 		if m.showLargeFiles && len(m.largeMultiSelected) > 0 {
 			deleteCount = len(m.largeMultiSelected)
-			// Calculate total size by looking up each selected path
 			for path := range m.largeMultiSelected {
 				for _, file := range m.largeFiles {
 					if file.Path == path {
@@ -399,7 +379,6 @@ func (m model) View() string {
 			}
 		} else if !m.showLargeFiles && len(m.multiSelected) > 0 {
 			deleteCount = len(m.multiSelected)
-			// Calculate total size by looking up each selected path
 			for path := range m.multiSelected {
 				for _, entry := range m.entries {
 					if entry.Path == path {
@@ -425,27 +404,24 @@ func (m model) View() string {
 	return b.String()
 }

-// calculateViewport computes the number of visible items based on terminal height.
+// calculateViewport returns visible rows for the current terminal height.
 func calculateViewport(termHeight int, isLargeFiles bool) int {
 	if termHeight <= 0 {
-		// Terminal height unknown, use default
 		return defaultViewport
 	}

-	// Calculate reserved space for UI elements
-	reserved := 6 // header (3-4 lines) + footer (2 lines)
+	reserved := 6 // Header + footer
 	if isLargeFiles {
-		reserved = 5 // Large files view has less overhead
+		reserved = 5
 	}

 	available := termHeight - reserved

-	// Ensure minimum and maximum bounds
 	if available < 1 {
-		return 1 // Minimum 1 line for very short terminals
+		return 1
 	}
 	if available > 30 {
-		return 30 // Maximum 30 lines to avoid information overload
+		return 30
 	}

 	return available
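The clamp in `calculateViewport` is worth seeing at its edges. A sketch with a stand-in `defaultViewport` (the real constant lives elsewhere in the package):

```go
package main

import "fmt"

const defaultViewport = 15 // assumption for this sketch

func calculateViewport(termHeight int, isLargeFiles bool) int {
	if termHeight <= 0 {
		return defaultViewport
	}
	reserved := 6
	if isLargeFiles {
		reserved = 5
	}
	available := termHeight - reserved
	if available < 1 {
		return 1
	}
	if available > 30 {
		return 30
	}
	return available
}

func main() {
	fmt.Println(calculateViewport(0, false))  // 15: height unknown, use default
	fmt.Println(calculateViewport(5, false))  // 1: very short terminal, floor applies
	fmt.Println(calculateViewport(24, false)) // 18: 24 rows minus 6 reserved
	fmt.Println(calculateViewport(80, false)) // 30: capped
}
```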
@@ -72,7 +72,7 @@ func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		m.metrics = msg.data
 		m.lastUpdated = msg.data.CollectedAt
 		m.collecting = false
-		// Mark ready after first successful data collection
+		// Mark ready after first successful data collection.
 		if !m.ready {
 			m.ready = true
 		}
@@ -126,7 +126,7 @@ func animTick() tea.Cmd {
 }

 func animTickWithSpeed(cpuUsage float64) tea.Cmd {
-	// Higher CPU = faster animation (50ms to 300ms)
+	// Higher CPU = faster animation.
 	interval := 300 - int(cpuUsage*2.5)
 	if interval < 50 {
 		interval = 50
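The interval formula maps CPU load linearly onto tick speed and clamps at 50ms, so 0% CPU gives 300ms ticks and 100% gives 50ms. Checked at a few points:

```go
package main

import "fmt"

func intervalMS(cpuUsage float64) int {
	interval := 300 - int(cpuUsage*2.5)
	if interval < 50 {
		interval = 50
	}
	return interval
}

func main() {
	for _, cpu := range []float64{0, 40, 100} {
		fmt.Printf("%.0f%% CPU -> %dms\n", cpu, intervalMS(cpu)) // 300, 200, 50
	}
}
```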
@@ -141,16 +141,16 @@ type BluetoothDevice struct {
 }

 type Collector struct {
-	// Static Cache (Collected once at startup)
+	// Static cache.
 	cachedHW  HardwareInfo
 	lastHWAt  time.Time
 	hasStatic bool

-	// Slow Cache (Collected every 30s-1m)
+	// Slow cache (30s-1m).
 	lastBTAt time.Time
 	lastBT   []BluetoothDevice

-	// Fast Metrics (Collected every 1 second)
+	// Fast metrics (1s).
 	prevNet   map[string]net.IOCountersStat
 	lastNetAt time.Time
 	lastGPUAt time.Time
@@ -168,9 +168,7 @@ func NewCollector() *Collector {
 func (c *Collector) Collect() (MetricsSnapshot, error) {
 	now := time.Now()

-	// Start host info collection early (it's fast but good to parallelize if possible,
-	// but it returns a struct needed for result, so we can just run it here or in parallel)
-	// host.Info is usually cached by gopsutil but let's just call it.
+	// Host info is cached by gopsutil; fetch once.
 	hostInfo, _ := host.Info()

 	var (
@@ -192,7 +190,7 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
 		topProcs []ProcessInfo
 	)

-	// Helper to launch concurrent collection
+	// Helper to launch concurrent collection.
 	collect := func(fn func() error) {
 		wg.Add(1)
 		go func() {
@@ -209,7 +207,7 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
 		}()
 	}

-	// Launch all independent collection tasks
+	// Launch independent collection tasks.
 	collect(func() (err error) { cpuStats, err = collectCPU(); return })
 	collect(func() (err error) { memStats, err = collectMemory(); return })
 	collect(func() (err error) { diskStats, err = collectDisks(); return })
@@ -221,7 +219,7 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
 	collect(func() (err error) { sensorStats, _ = collectSensors(); return nil })
 	collect(func() (err error) { gpuStats, err = c.collectGPU(now); return })
 	collect(func() (err error) {
-		// Bluetooth is slow, cache for 30s
+		// Bluetooth is slow; cache for 30s.
 		if now.Sub(c.lastBTAt) > 30*time.Second || len(c.lastBT) == 0 {
 			btStats = c.collectBluetooth(now)
 			c.lastBT = btStats
@@ -233,12 +231,11 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
 	})
 	collect(func() (err error) { topProcs = collectTopProcesses(); return nil })

-	// Wait for all to complete
+	// Wait for all to complete.
 	wg.Wait()

-	// Dependent tasks (must run after others)
-	// Dependent tasks (must run after others)
-	// Cache hardware info as it's expensive and rarely changes
+	// Dependent tasks (post-collect).
+	// Cache hardware info as it's expensive and rarely changes.
 	if !c.hasStatic || now.Sub(c.lastHWAt) > 10*time.Minute {
 		c.cachedHW = collectHardware(memStats.Total, diskStats)
 		c.lastHWAt = now
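The Collector staggers probe costs with per-field TTLs (30s for Bluetooth, 10 minutes for hardware). The same pattern as a small generic helper (illustrative; the repo stores timestamps on the struct directly rather than using a helper like this):

```go
package main

import (
	"fmt"
	"time"
)

type cached[T any] struct {
	value T
	at    time.Time
	ttl   time.Duration
}

// get refreshes the value only when the TTL has lapsed (the zero time
// always counts as stale, so the first call probes).
func (c *cached[T]) get(refresh func() T) T {
	if time.Since(c.at) > c.ttl {
		c.value = refresh()
		c.at = time.Now()
	}
	return c.value
}

func main() {
	bt := cached[[]string]{ttl: 30 * time.Second}
	probe := func() []string {
		fmt.Println("expensive probe runs")
		return []string{"AirPods"}
	}
	fmt.Println(bt.get(probe)) // probe runs
	fmt.Println(bt.get(probe)) // served from cache
}
```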
@@ -272,8 +269,6 @@ func (c *Collector) Collect() (MetricsSnapshot, error) {
 	}, mergeErr
 }

-// Utility functions
-
 func runCmd(ctx context.Context, name string, args ...string) (string, error) {
 	cmd := exec.CommandContext(ctx, name, args...)
 	output, err := cmd.Output()
@@ -289,11 +284,9 @@ func commandExists(name string) bool {
 	}
 	defer func() {
 		if r := recover(); r != nil {
-			// If LookPath panics due to permissions or platform quirks, act as if the command is missing.
+			// Treat LookPath panics as "missing".
 		}
 	}()
 	_, err := exec.LookPath(name)
 	return err == nil
 }

-// humanBytes is defined in view.go to avoid duplication
@@ -15,7 +15,7 @@ import (
 )

 var (
-	// Package-level cache for heavy system_profiler data
+	// Cache for heavy system_profiler output.
 	lastPowerAt   time.Time
 	cachedPower   string
 	powerCacheTTL = 30 * time.Second
@@ -24,15 +24,15 @@ var (
 func collectBatteries() (batts []BatteryStatus, err error) {
 	defer func() {
 		if r := recover(); r != nil {
-			// Swallow panics from platform-specific battery probes to keep the UI alive.
+			// Swallow panics to keep UI alive.
 			err = fmt.Errorf("battery collection failed: %v", r)
 		}
 	}()

-	// macOS: pmset (fast, for real-time percentage/status)
+	// macOS: pmset for real-time percentage/status.
 	if runtime.GOOS == "darwin" && commandExists("pmset") {
 		if out, err := runCmd(context.Background(), "pmset", "-g", "batt"); err == nil {
-			// Get heavy info (health, cycles) from cached system_profiler
+			// Health/cycles from cached system_profiler.
 			health, cycles := getCachedPowerData()
 			if batts := parsePMSet(out, health, cycles); len(batts) > 0 {
 				return batts, nil
@@ -40,7 +40,7 @@ func collectBatteries() (batts []BatteryStatus, err error) {
 		}
 	}

-	// Linux: /sys/class/power_supply
+	// Linux: /sys/class/power_supply.
 	matches, _ := filepath.Glob("/sys/class/power_supply/BAT*/capacity")
 	for _, capFile := range matches {
 		statusFile := filepath.Join(filepath.Dir(capFile), "status")
@@ -73,9 +73,8 @@ func parsePMSet(raw string, health string, cycles int) []BatteryStatus {
 	var timeLeft string

 	for _, line := range lines {
-		// Check for time remaining
+		// Time remaining.
 		if strings.Contains(line, "remaining") {
-			// Extract time like "1:30 remaining"
 			parts := strings.Fields(line)
 			for i, p := range parts {
 				if p == "remaining" && i > 0 {
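`pmset -g batt` emits lines like the sample below; the parser scans fields for the percentage token and the token preceding "remaining". A standalone sketch (the sample string approximates typical macOS output, which varies slightly across versions):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	sample := " -InternalBattery-0 (id=12345)\t87%; discharging; 3:20 remaining present: true"

	var percent, timeLeft string
	parts := strings.Fields(sample)
	for i, p := range parts {
		if strings.HasSuffix(p, "%;") {
			percent = strings.TrimSuffix(p, "%;")
		}
		if p == "remaining" && i > 0 {
			timeLeft = parts[i-1] // the token before "remaining"
		}
	}
	fmt.Println(percent, timeLeft) // 87 3:20
}
```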
@@ -121,7 +120,7 @@ func parsePMSet(raw string, health string, cycles int) []BatteryStatus {
 	return out
 }

-// getCachedPowerData returns condition, cycles, and fan speed from cached system_profiler output.
+// getCachedPowerData returns condition and cycles from cached system_profiler.
 func getCachedPowerData() (health string, cycles int) {
 	out := getSystemPowerOutput()
 	if out == "" {
@@ -173,7 +172,7 @@ func collectThermal() ThermalStatus {

 	var thermal ThermalStatus

-	// Get fan info and adapter power from cached system_profiler
+	// Fan info from cached system_profiler.
 	out := getSystemPowerOutput()
 	if out != "" {
 		lines := strings.Split(out, "\n")
@@ -181,7 +180,6 @@ func collectThermal() ThermalStatus {
 			lower := strings.ToLower(line)
 			if strings.Contains(lower, "fan") && strings.Contains(lower, "speed") {
 				if _, after, found := strings.Cut(line, ":"); found {
-					// Extract number from string like "1200 RPM"
 					numStr := strings.TrimSpace(after)
 					numStr, _, _ = strings.Cut(numStr, " ")
 					thermal.FanSpeed, _ = strconv.Atoi(numStr)
@@ -190,7 +188,7 @@ func collectThermal() ThermalStatus {
 		}
 	}

-	// Get power metrics from ioreg (fast, real-time data)
+	// Power metrics from ioreg (fast, real-time).
 	ctxPower, cancelPower := context.WithTimeout(context.Background(), 500*time.Millisecond)
 	defer cancelPower()
 	if out, err := runCmd(ctxPower, "ioreg", "-rn", "AppleSmartBattery"); err == nil {
@@ -198,8 +196,7 @@ func collectThermal() ThermalStatus {
 		for _, line := range lines {
 			line = strings.TrimSpace(line)

-			// Get battery temperature
-			// Matches: "Temperature" = 3055 (note: space before =)
+			// Battery temperature ("Temperature" = 3055).
 			if _, after, found := strings.Cut(line, "\"Temperature\" = "); found {
 				valStr := strings.TrimSpace(after)
 				if tempRaw, err := strconv.Atoi(valStr); err == nil && tempRaw > 0 {
@@ -207,13 +204,10 @@ func collectThermal() ThermalStatus {
 				}
 			}

-			// Get adapter power (Watts)
-			// Read from current adapter: "AdapterDetails" = {"Watts"=140...}
-			// Skip historical data: "AppleRawAdapterDetails" = ({Watts=90}, {Watts=140})
+			// Adapter power (Watts) from current adapter.
 			if strings.Contains(line, "\"AdapterDetails\" = {") && !strings.Contains(line, "AppleRaw") {
 				if _, after, found := strings.Cut(line, "\"Watts\"="); found {
 					valStr := strings.TrimSpace(after)
-					// Remove trailing characters like , or }
 					valStr, _, _ = strings.Cut(valStr, ",")
 					valStr, _, _ = strings.Cut(valStr, "}")
 					valStr = strings.TrimSpace(valStr)
@@ -223,8 +217,7 @@ func collectThermal() ThermalStatus {
 				}
 			}

-			// Get system power consumption (mW -> W)
-			// Matches: "SystemPowerIn"=12345
+			// System power consumption (mW -> W).
 			if _, after, found := strings.Cut(line, "\"SystemPowerIn\"="); found {
 				valStr := strings.TrimSpace(after)
 				valStr, _, _ = strings.Cut(valStr, ",")
@@ -235,8 +228,7 @@ func collectThermal() ThermalStatus {
 			}
 		}

-		// Get battery power (mW -> W, positive = discharging)
-		// Matches: "BatteryPower"=12345
+		// Battery power (mW -> W, positive = discharging).
 		if _, after, found := strings.Cut(line, "\"BatteryPower\"="); found {
 			valStr := strings.TrimSpace(after)
 			valStr, _, _ = strings.Cut(valStr, ",")
@@ -249,14 +241,13 @@ func collectThermal() ThermalStatus {
 		}
 	}

-	// Fallback: Try thermal level as a proxy if temperature not found
+	// Fallback: thermal level proxy.
 	if thermal.CPUTemp == 0 {
 		ctx2, cancel2 := context.WithTimeout(context.Background(), 500*time.Millisecond)
 		defer cancel2()
 		out2, err := runCmd(ctx2, "sysctl", "-n", "machdep.xcpm.cpu_thermal_level")
 		if err == nil {
 			level, _ := strconv.Atoi(strings.TrimSpace(out2))
-			// Estimate temp: level 0-100 roughly maps to 40-100°C
 			if level >= 0 {
 				thermal.CPUTemp = 45 + float64(level)*0.5
 			}
|
|||||||
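A standalone sketch of the ioreg line parsing this hunk keeps, using the same strings.Cut pattern. The sample values are illustrative, and the hundredths-of-a-degree Temperature scale is an assumption — the actual conversion sits in the elided lines:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Illustrative `ioreg -rn AppleSmartBattery` lines; real values vary.
	lines := []string{
		`"Temperature" = 3055`,
		`"SystemPowerIn"=12345,`,
	}
	for _, line := range lines {
		line = strings.TrimSpace(line)
		// Battery temperature, assuming hundredths of °C (3055 -> 30.55).
		if _, after, found := strings.Cut(line, `"Temperature" = `); found {
			if raw, err := strconv.Atoi(strings.TrimSpace(after)); err == nil && raw > 0 {
				fmt.Printf("battery temp: %.2f°C\n", float64(raw)/100)
			}
		}
		// System power, mW -> W as the comment in the hunk states.
		if _, after, found := strings.Cut(line, `"SystemPowerIn"=`); found {
			valStr, _, _ := strings.Cut(strings.TrimSpace(after), ",")
			if mw, err := strconv.Atoi(valStr); err == nil {
				fmt.Printf("system power: %.1f W\n", float64(mw)/1000)
			}
		}
	}
}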
@@ -80,7 +80,7 @@ func parseSPBluetooth(raw string) []BluetoothDevice {
 			continue
 		}
 		if !strings.HasPrefix(line, " ") && strings.HasSuffix(trim, ":") {
-			// Reset at top-level sections
+			// Reset at top-level sections.
 			currentName = ""
 			connected = false
 			battery = ""
@@ -31,12 +31,9 @@ func collectCPU() (CPUStatus, error) {
 		logical = 1
 	}
 
-	// Use two-call pattern for more reliable CPU measurements
-	// First call: initialize/store current CPU times
+	// Two-call pattern for more reliable CPU usage.
 	cpu.Percent(0, true)
-	// Wait for sampling interval
 	time.Sleep(cpuSampleInterval)
-	// Second call: get actual percentages based on difference
 	percents, err := cpu.Percent(0, true)
 	var totalPercent float64
 	perCoreEstimated := false
@@ -69,7 +66,7 @@ func collectCPU() (CPUStatus, error) {
 		}
 	}
 
-	// Get P-core and E-core counts for Apple Silicon
+	// P/E core counts for Apple Silicon.
 	pCores, eCores := getCoreTopology()
 
 	return CPUStatus{
@@ -91,14 +88,13 @@ func isZeroLoad(avg load.AvgStat) bool {
 }
 
 var (
-	// Package-level cache for core topology
+	// Cache for core topology.
 	lastTopologyAt time.Time
 	cachedP, cachedE int
 	topologyTTL = 10 * time.Minute
 )
 
-// getCoreTopology returns P-core and E-core counts on Apple Silicon.
-// Returns (0, 0) on non-Apple Silicon or if detection fails.
+// getCoreTopology returns P/E core counts on Apple Silicon.
 func getCoreTopology() (pCores, eCores int) {
 	if runtime.GOOS != "darwin" {
 		return 0, 0
@@ -114,7 +110,6 @@ func getCoreTopology() (pCores, eCores int) {
 	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
 	defer cancel()
 
-	// Get performance level info from sysctl
 	out, err := runCmd(ctx, "sysctl", "-n",
 		"hw.perflevel0.logicalcpu",
 		"hw.perflevel0.name",
@@ -129,15 +124,12 @@ func getCoreTopology() (pCores, eCores int) {
 		return 0, 0
 	}
 
-	// Parse perflevel0
 	level0Count, _ := strconv.Atoi(strings.TrimSpace(lines[0]))
 	level0Name := strings.ToLower(strings.TrimSpace(lines[1]))
 
-	// Parse perflevel1
 	level1Count, _ := strconv.Atoi(strings.TrimSpace(lines[2]))
 	level1Name := strings.ToLower(strings.TrimSpace(lines[3]))
 
-	// Assign based on name (Performance vs Efficiency)
 	if strings.Contains(level0Name, "performance") {
 		pCores = level0Count
 	} else if strings.Contains(level0Name, "efficiency") {
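The two-call pattern above, as a minimal runnable sketch. The 500ms interval and the gopsutil v3 import path are assumptions — the real cpuSampleInterval constant and module version sit outside this hunk:

package main

import (
	"fmt"
	"time"

	"github.com/shirou/gopsutil/v3/cpu"
)

func main() {
	// First call snapshots per-core CPU times without blocking.
	cpu.Percent(0, true)
	// Sleep one sampling interval, then read usage as the delta.
	time.Sleep(500 * time.Millisecond)
	percents, err := cpu.Percent(0, true)
	if err != nil {
		return
	}
	for i, p := range percents {
		fmt.Printf("core %d: %.1f%%\n", i, p)
	}
}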
@@ -43,7 +43,7 @@ func collectDisks() ([]DiskStatus, error) {
 		if strings.HasPrefix(part.Mountpoint, "/System/Volumes/") {
 			continue
 		}
-		// Skip private volumes
+		// Skip /private mounts.
 		if strings.HasPrefix(part.Mountpoint, "/private/") {
 			continue
 		}
@@ -58,12 +58,11 @@ func collectDisks() ([]DiskStatus, error) {
 		if err != nil || usage.Total == 0 {
 			continue
 		}
-		// Skip small volumes (< 1GB)
+		// Skip <1GB volumes.
 		if usage.Total < 1<<30 {
 			continue
 		}
-		// For APFS volumes, use a more precise dedup key (bytes level)
-		// to handle shared storage pools properly
+		// Use size-based dedupe key for shared pools.
 		volKey := fmt.Sprintf("%s:%d", part.Fstype, usage.Total)
 		if seenVolume[volKey] {
 			continue
@@ -94,7 +93,7 @@ func collectDisks() ([]DiskStatus, error) {
 }
 
 var (
-	// Package-level cache for external disk status
+	// External disk cache.
 	lastDiskCacheAt time.Time
 	diskTypeCache = make(map[string]bool)
 	diskCacheTTL = 2 * time.Minute
@@ -106,7 +105,7 @@ func annotateDiskTypes(disks []DiskStatus) {
 	}
 
 	now := time.Now()
-	// Clear cache if stale
+	// Clear stale cache.
 	if now.Sub(lastDiskCacheAt) > diskCacheTTL {
 		diskTypeCache = make(map[string]bool)
 		lastDiskCacheAt = now
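How the fstype:size dedupe key above behaves for an APFS container shared by two volumes — a sketch with made-up mountpoints and sizes:

package main

import "fmt"

type part struct {
	fstype, mount string
	total         uint64
}

func main() {
	seen := map[string]bool{}
	for _, p := range []part{
		{"apfs", "/", 494 << 30},
		{"apfs", "/Volumes/SameContainer", 494 << 30}, // same pool -> same key
		{"apfs", "/Volumes/External", 2 << 40},
	} {
		key := fmt.Sprintf("%s:%d", p.fstype, p.total)
		if seen[key] {
			fmt.Println("skip duplicate:", p.mount)
			continue
		}
		seen[key] = true
		fmt.Println("keep:", p.mount)
	}
}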
@@ -17,7 +17,7 @@ const (
 	powermetricsTimeout = 2 * time.Second
 )
 
-// Pre-compiled regex patterns for GPU usage parsing
+// Regex for GPU usage parsing.
 var (
 	gpuActiveResidencyRe = regexp.MustCompile(`GPU HW active residency:\s+([\d.]+)%`)
 	gpuIdleResidencyRe = regexp.MustCompile(`GPU idle residency:\s+([\d.]+)%`)
@@ -25,7 +25,7 @@ var (
 
 func (c *Collector) collectGPU(now time.Time) ([]GPUStatus, error) {
 	if runtime.GOOS == "darwin" {
-		// Get static GPU info (cached for 10 min)
+		// Static GPU info (cached 10 min).
 		if len(c.cachedGPU) == 0 || c.lastGPUAt.IsZero() || now.Sub(c.lastGPUAt) >= macGPUInfoTTL {
 			if gpus, err := readMacGPUInfo(); err == nil && len(gpus) > 0 {
 				c.cachedGPU = gpus
@@ -33,12 +33,12 @@ func (c *Collector) collectGPU(now time.Time) ([]GPUStatus, error) {
 			}
 		}
 
-		// Get real-time GPU usage
+		// Real-time GPU usage.
 		if len(c.cachedGPU) > 0 {
 			usage := getMacGPUUsage()
 			result := make([]GPUStatus, len(c.cachedGPU))
 			copy(result, c.cachedGPU)
-			// Apply usage to first GPU (Apple Silicon has one integrated GPU)
+			// Apply usage to first GPU (Apple Silicon).
 			if len(result) > 0 {
 				result[0].Usage = usage
 			}
@@ -152,19 +152,18 @@ func readMacGPUInfo() ([]GPUStatus, error) {
 	return gpus, nil
 }
 
-// getMacGPUUsage gets GPU active residency from powermetrics.
-// Returns -1 if unavailable (e.g., not running as root).
+// getMacGPUUsage reads GPU active residency from powermetrics.
 func getMacGPUUsage() float64 {
 	ctx, cancel := context.WithTimeout(context.Background(), powermetricsTimeout)
 	defer cancel()
 
-	// powermetrics requires root, but we try anyway - some systems may have it enabled
+	// powermetrics may require root.
 	out, err := runCmd(ctx, "powermetrics", "--samplers", "gpu_power", "-i", "500", "-n", "1")
 	if err != nil {
 		return -1
 	}
 
-	// Parse "GPU HW active residency: X.XX%"
+	// Parse "GPU HW active residency: X.XX%".
 	matches := gpuActiveResidencyRe.FindStringSubmatch(out)
 	if len(matches) >= 2 {
 		usage, err := strconv.ParseFloat(matches[1], 64)
@@ -173,7 +172,7 @@ func getMacGPUUsage() float64 {
 		}
 	}
 
-	// Fallback: parse "GPU idle residency: X.XX%" and calculate active
+	// Fallback: parse idle residency and derive active.
 	matchesIdle := gpuIdleResidencyRe.FindStringSubmatch(out)
 	if len(matchesIdle) >= 2 {
 		idle, err := strconv.ParseFloat(matchesIdle[1], 64)
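What the residency regexes above match, against an illustrative powermetrics line (real output varies by machine and OS version); the fallback comment reads as deriving active as 100 minus idle, which is the assumption here:

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

var (
	gpuActiveResidencyRe = regexp.MustCompile(`GPU HW active residency:\s+([\d.]+)%`)
	gpuIdleResidencyRe   = regexp.MustCompile(`GPU idle residency:\s+([\d.]+)%`)
)

func main() {
	out := "GPU HW active residency:  17.43%\nGPU idle residency:  82.57%" // illustrative
	if m := gpuActiveResidencyRe.FindStringSubmatch(out); len(m) >= 2 {
		usage, _ := strconv.ParseFloat(m[1], 64)
		fmt.Printf("active: %.2f%%\n", usage)
	}
	if m := gpuIdleResidencyRe.FindStringSubmatch(out); len(m) >= 2 {
		idle, _ := strconv.ParseFloat(m[1], 64)
		fmt.Printf("derived active: %.2f%%\n", 100-idle)
	}
}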
@@ -18,19 +18,18 @@ func collectHardware(totalRAM uint64, disks []DiskStatus) HardwareInfo {
 		}
 	}
 
-	// Get model and CPU from system_profiler
+	// Model and CPU from system_profiler.
 	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
 	defer cancel()
 
 	var model, cpuModel, osVersion string
 
-	// Get hardware overview
 	out, err := runCmd(ctx, "system_profiler", "SPHardwareDataType")
 	if err == nil {
 		lines := strings.Split(out, "\n")
 		for _, line := range lines {
 			lower := strings.ToLower(strings.TrimSpace(line))
-			// Prefer "Model Name" over "Model Identifier"
+			// Prefer "Model Name" over "Model Identifier".
 			if strings.Contains(lower, "model name:") {
 				parts := strings.Split(line, ":")
 				if len(parts) == 2 {
@@ -52,7 +51,6 @@ func collectHardware(totalRAM uint64, disks []DiskStatus) HardwareInfo {
 		}
 	}
 
-	// Get macOS version
 	ctx2, cancel2 := context.WithTimeout(context.Background(), 1*time.Second)
 	defer cancel2()
 	out2, err := runCmd(ctx2, "sw_vers", "-productVersion")
@@ -60,7 +58,6 @@ func collectHardware(totalRAM uint64, disks []DiskStatus) HardwareInfo {
 		osVersion = "macOS " + strings.TrimSpace(out2)
 	}
 
-	// Get disk size
 	diskSize := "Unknown"
 	if len(disks) > 0 {
 		diskSize = humanBytes(disks[0].Total)
@@ -5,45 +5,43 @@ import (
 	"strings"
 )
 
-// Health score calculation weights and thresholds
+// Health score weights and thresholds.
 const (
-	// Weights (must sum to ~100 for total score)
+	// Weights.
 	healthCPUWeight = 30.0
 	healthMemWeight = 25.0
 	healthDiskWeight = 20.0
 	healthThermalWeight = 15.0
 	healthIOWeight = 10.0
 
-	// CPU thresholds
+	// CPU.
 	cpuNormalThreshold = 30.0
 	cpuHighThreshold = 70.0
 
-	// Memory thresholds
+	// Memory.
 	memNormalThreshold = 50.0
 	memHighThreshold = 80.0
 	memPressureWarnPenalty = 5.0
 	memPressureCritPenalty = 15.0
 
-	// Disk thresholds
+	// Disk.
 	diskWarnThreshold = 70.0
 	diskCritThreshold = 90.0
 
-	// Thermal thresholds
+	// Thermal.
 	thermalNormalThreshold = 60.0
 	thermalHighThreshold = 85.0
 
-	// Disk IO thresholds (MB/s)
+	// Disk IO (MB/s).
 	ioNormalThreshold = 50.0
 	ioHighThreshold = 150.0
 )
 
 func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, diskIO DiskIOStatus, thermal ThermalStatus) (int, string) {
-	// Start with perfect score
 	score := 100.0
 	issues := []string{}
 
-	// CPU Usage (30% weight) - deduct up to 30 points
-	// 0-30% CPU = 0 deduction, 30-70% = linear, 70-100% = heavy penalty
+	// CPU penalty.
 	cpuPenalty := 0.0
 	if cpu.Usage > cpuNormalThreshold {
 		if cpu.Usage > cpuHighThreshold {
@@ -57,8 +55,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
 		issues = append(issues, "High CPU")
 	}
 
-	// Memory Usage (25% weight) - deduct up to 25 points
-	// 0-50% = 0 deduction, 50-80% = linear, 80-100% = heavy penalty
+	// Memory penalty.
 	memPenalty := 0.0
 	if mem.UsedPercent > memNormalThreshold {
 		if mem.UsedPercent > memHighThreshold {
@@ -72,7 +69,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
 		issues = append(issues, "High Memory")
 	}
 
-	// Memory Pressure (extra penalty)
+	// Memory pressure penalty.
 	if mem.Pressure == "warn" {
 		score -= memPressureWarnPenalty
 		issues = append(issues, "Memory Pressure")
@@ -81,7 +78,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
 		issues = append(issues, "Critical Memory")
 	}
 
-	// Disk Usage (20% weight) - deduct up to 20 points
+	// Disk penalty.
 	diskPenalty := 0.0
 	if len(disks) > 0 {
 		diskUsage := disks[0].UsedPercent
@@ -98,7 +95,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
 		}
 	}
 
-	// Thermal (15% weight) - deduct up to 15 points
+	// Thermal penalty.
 	thermalPenalty := 0.0
 	if thermal.CPUTemp > 0 {
 		if thermal.CPUTemp > thermalNormalThreshold {
@@ -112,7 +109,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
 		score -= thermalPenalty
 	}
 
-	// Disk IO (10% weight) - deduct up to 10 points
+	// Disk IO penalty.
 	ioPenalty := 0.0
 	totalIO := diskIO.ReadRate + diskIO.WriteRate
 	if totalIO > ioNormalThreshold {
@@ -125,7 +122,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
 	}
 	score -= ioPenalty
 
-	// Ensure score is in valid range
+	// Clamp score.
 	if score < 0 {
 		score = 0
 	}
@@ -133,7 +130,7 @@ func calculateHealthScore(cpu CPUStatus, mem MemoryStatus, disks []DiskStatus, d
 		score = 100
 	}
 
-	// Generate message
+	// Build message.
 	msg := "Excellent"
 	if score >= 90 {
 		msg = "Excellent"
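A worked example of the weights and clamp above. The linear ramp between each normal and high threshold is an assumption for illustration, since the interpolation itself sits in the elided hunks:

package main

import "fmt"

// linearPenalty is a hypothetical helper: zero below normal, the full
// weight at or above high, linear in between.
func linearPenalty(v, normal, high, weight float64) float64 {
	switch {
	case v <= normal:
		return 0
	case v >= high:
		return weight
	default:
		return weight * (v - normal) / (high - normal)
	}
}

func main() {
	score := 100.0
	score -= linearPenalty(50, 30, 70, 30) // CPU 50%  -> -15
	score -= linearPenalty(65, 50, 80, 25) // Mem 65%  -> -12.5
	score -= linearPenalty(75, 70, 90, 20) // Disk 75% -> -5
	if score < 0 {
		score = 0
	}
	if score > 100 {
		score = 100
	}
	fmt.Printf("health: %d\n", int(score)) // 67
}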
@@ -17,7 +17,7 @@ func (c *Collector) collectNetwork(now time.Time) ([]NetworkStatus, error) {
 		return nil, err
 	}
 
-	// Get IP addresses for interfaces
+	// Map interface IPs.
 	ifAddrs := getInterfaceIPs()
 
 	if c.lastNetAt.IsZero() {
@@ -81,7 +81,7 @@ func getInterfaceIPs() map[string]string {
 	}
 	for _, iface := range ifaces {
 		for _, addr := range iface.Addrs {
-			// Only IPv4
+			// IPv4 only.
 			if strings.Contains(addr.Addr, ".") && !strings.HasPrefix(addr.Addr, "127.") {
 				ip := strings.Split(addr.Addr, "/")[0]
 				result[iface.Name] = ip
@@ -104,14 +104,14 @@ func isNoiseInterface(name string) bool {
 }
 
 func collectProxy() ProxyStatus {
-	// Check environment variables first
+	// Check environment variables first.
 	for _, env := range []string{"https_proxy", "HTTPS_PROXY", "http_proxy", "HTTP_PROXY"} {
 		if val := os.Getenv(env); val != "" {
 			proxyType := "HTTP"
 			if strings.HasPrefix(val, "socks") {
 				proxyType = "SOCKS"
 			}
-			// Extract host
+			// Extract host.
 			host := val
 			if strings.Contains(host, "://") {
 				host = strings.SplitN(host, "://", 2)[1]
@@ -123,7 +123,7 @@ func collectProxy() ProxyStatus {
 		}
 	}
 
-	// macOS: check system proxy via scutil
+	// macOS: check system proxy via scutil.
 	if runtime.GOOS == "darwin" {
 		ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
 		defer cancel()
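The env-var branch of collectProxy above, reduced to a runnable sketch (the proxy URL is illustrative):

package main

import (
	"fmt"
	"strings"
)

func main() {
	val := "http://127.0.0.1:7890" // e.g. $https_proxy
	proxyType := "HTTP"
	if strings.HasPrefix(val, "socks") {
		proxyType = "SOCKS"
	}
	host := val
	if strings.Contains(host, "://") {
		host = strings.SplitN(host, "://", 2)[1]
	}
	fmt.Println(proxyType, host) // HTTP 127.0.0.1:7890
}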
@@ -15,7 +15,7 @@ func collectTopProcesses() []ProcessInfo {
 	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
 	defer cancel()
 
-	// Use ps to get top processes by CPU
+	// Use ps to get top processes by CPU.
 	out, err := runCmd(ctx, "ps", "-Aceo", "pcpu,pmem,comm", "-r")
 	if err != nil {
 		return nil
@@ -24,10 +24,10 @@ func collectTopProcesses() []ProcessInfo {
 	lines := strings.Split(strings.TrimSpace(out), "\n")
 	var procs []ProcessInfo
 	for i, line := range lines {
-		if i == 0 { // skip header
+		if i == 0 {
 			continue
 		}
-		if i > 5 { // top 5
+		if i > 5 {
 			break
 		}
 		fields := strings.Fields(line)
@@ -37,7 +37,7 @@ func collectTopProcesses() []ProcessInfo {
 		cpuVal, _ := strconv.ParseFloat(fields[0], 64)
 		memVal, _ := strconv.ParseFloat(fields[1], 64)
 		name := fields[len(fields)-1]
-		// Get just the process name without path
+		// Strip path from command name.
 		if idx := strings.LastIndex(name, "/"); idx >= 0 {
 			name = name[idx+1:]
 		}
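The ps parsing above against a sample; the rows here are made up, and command names containing spaces would need more than the last Fields element:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Illustrative `ps -Aceo pcpu,pmem,comm -r` output; real output varies.
	out := "%CPU %MEM COMM\n42.0  1.3 WindowServer\n12.5  4.2 Safari"
	for i, line := range strings.Split(out, "\n") {
		if i == 0 {
			continue // header row
		}
		fields := strings.Fields(line)
		if len(fields) < 3 {
			continue
		}
		cpuVal, _ := strconv.ParseFloat(fields[0], 64)
		memVal, _ := strconv.ParseFloat(fields[1], 64)
		fmt.Printf("%-12s cpu=%.1f%% mem=%.1f%%\n", fields[len(fields)-1], cpuVal, memVal)
	}
}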
@@ -33,7 +33,7 @@ const (
 	iconProcs = "❊"
 )
 
-// Check if it's Christmas season (Dec 10-31)
+// isChristmasSeason reports Dec 10-31.
 func isChristmasSeason() bool {
 	now := time.Now()
 	month := now.Month()
@@ -41,7 +41,7 @@ func isChristmasSeason() bool {
 	return month == time.December && day >= 10 && day <= 31
 }
 
-// Mole body frames (legs animate)
+// Mole body frames.
 var moleBody = [][]string{
 	{
 		` /\_/\`,
@@ -69,7 +69,7 @@ var moleBody = [][]string{
 	},
 }
 
-// Mole body frames with Christmas hat
+// Mole body frames with Christmas hat.
 var moleBodyWithHat = [][]string{
 	{
 		`    *`,
@@ -105,7 +105,7 @@ var moleBodyWithHat = [][]string{
 	},
 }
 
-// Generate frames with horizontal movement
+// getMoleFrame renders the animated mole.
 func getMoleFrame(animFrame int, termWidth int) string {
 	var body []string
 	var bodyIdx int
@@ -119,15 +119,12 @@ func getMoleFrame(animFrame int, termWidth int) string {
 		body = moleBody[bodyIdx]
 	}
 
-	// Calculate mole width (approximate)
 	moleWidth := 15
-	// Move across terminal width
 	maxPos := termWidth - moleWidth
 	if maxPos < 0 {
 		maxPos = 0
 	}
 
-	// Move position: 0 -> maxPos -> 0
 	cycleLength := maxPos * 2
 	if cycleLength == 0 {
 		cycleLength = 1
@@ -141,7 +138,6 @@ func getMoleFrame(animFrame int, termWidth int) string {
 	var lines []string
 
 	if isChristmas {
-		// Render with red hat on first 3 lines
 		for i, line := range body {
 			if i < 3 {
 				lines = append(lines, padding+hatStyle.Render(line))
@@ -165,27 +161,24 @@ type cardData struct {
 }
 
 func renderHeader(m MetricsSnapshot, errMsg string, animFrame int, termWidth int) string {
-	// Title
 	title := titleStyle.Render("Mole Status")
 
-	// Health Score
 	scoreStyle := getScoreStyle(m.HealthScore)
 	scoreText := subtleStyle.Render("Health ") + scoreStyle.Render(fmt.Sprintf("● %d", m.HealthScore))
 
-	// Hardware info - compact for single line
+	// Hardware info for a single line.
 	infoParts := []string{}
 	if m.Hardware.Model != "" {
 		infoParts = append(infoParts, primaryStyle.Render(m.Hardware.Model))
 	}
 	if m.Hardware.CPUModel != "" {
 		cpuInfo := m.Hardware.CPUModel
-		// Add GPU core count if available (compact format)
+		// Append GPU core count when available.
 		if len(m.GPU) > 0 && m.GPU[0].CoreCount > 0 {
 			cpuInfo += fmt.Sprintf(" (%dGPU)", m.GPU[0].CoreCount)
 		}
 		infoParts = append(infoParts, cpuInfo)
 	}
-	// Combine RAM and Disk to save space
 	var specs []string
 	if m.Hardware.TotalRAM != "" {
 		specs = append(specs, m.Hardware.TotalRAM)
@@ -200,10 +193,8 @@ func renderHeader(m MetricsSnapshot, errMsg string, animFrame int, termWidth int
 		infoParts = append(infoParts, m.Hardware.OSVersion)
 	}
 
-	// Single line compact header
 	headerLine := title + " " + scoreText + " " + subtleStyle.Render(strings.Join(infoParts, " · "))
 
-	// Running mole animation
 	mole := getMoleFrame(animFrame, termWidth)
 
 	if errMsg != "" {
@@ -214,19 +205,14 @@ func renderHeader(m MetricsSnapshot, errMsg string, animFrame int, termWidth int
 
 func getScoreStyle(score int) lipgloss.Style {
 	if score >= 90 {
-		// Excellent - Bright Green
 		return lipgloss.NewStyle().Foreground(lipgloss.Color("#87FF87")).Bold(true)
 	} else if score >= 75 {
-		// Good - Green
 		return lipgloss.NewStyle().Foreground(lipgloss.Color("#87D787")).Bold(true)
 	} else if score >= 60 {
-		// Fair - Yellow
 		return lipgloss.NewStyle().Foreground(lipgloss.Color("#FFD75F")).Bold(true)
 	} else if score >= 40 {
-		// Poor - Orange
 		return lipgloss.NewStyle().Foreground(lipgloss.Color("#FFAF5F")).Bold(true)
 	} else {
-		// Critical - Red
 		return lipgloss.NewStyle().Foreground(lipgloss.Color("#FF6B6B")).Bold(true)
 	}
 }
@@ -240,7 +226,6 @@ func buildCards(m MetricsSnapshot, _ int) []cardData {
 		renderProcessCard(m.TopProcesses),
 		renderNetworkCard(m.Network, m.Proxy),
 	}
-	// Only show sensors if we have valid temperature readings
 	if hasSensorData(m.Sensors) {
 		cards = append(cards, renderSensorsCard(m.Sensors))
 	}
@@ -334,7 +319,7 @@ func renderMemoryCard(mem MemoryStatus) cardData {
 	} else {
 		lines = append(lines, fmt.Sprintf("Swap %s", subtleStyle.Render("not in use")))
 	}
-	// Memory pressure
+	// Memory pressure status.
 	if mem.Pressure != "" {
 		pressureStyle := okStyle
 		pressureText := "Status " + mem.Pressure
@@ -405,7 +390,6 @@ func formatDiskLine(label string, d DiskStatus) string {
 }
 
 func ioBar(rate float64) string {
-	// Scale: 0-50 MB/s maps to 0-5 blocks
 	filled := int(rate / 10.0)
 	if filled > 5 {
 		filled = 5
@@ -441,7 +425,7 @@ func renderProcessCard(procs []ProcessInfo) cardData {
 }
 
 func miniBar(percent float64) string {
-	filled := int(percent / 20) // 5 chars max for 100%
+	filled := int(percent / 20)
 	if filled > 5 {
 		filled = 5
 	}
@@ -471,7 +455,7 @@ func renderNetworkCard(netStats []NetworkStatus, proxy ProxyStatus) cardData {
 	txBar := netBar(totalTx)
 	lines = append(lines, fmt.Sprintf("Down %s %s", rxBar, formatRate(totalRx)))
 	lines = append(lines, fmt.Sprintf("Up %s %s", txBar, formatRate(totalTx)))
-	// Show proxy and IP in one line
+	// Show proxy and IP on one line.
 	var infoParts []string
 	if proxy.Enabled {
 		infoParts = append(infoParts, "Proxy "+proxy.Type)
@@ -487,7 +471,6 @@ func renderNetworkCard(netStats []NetworkStatus, proxy ProxyStatus) cardData {
 }
 
 func netBar(rate float64) string {
-	// Scale: 0-10 MB/s maps to 0-5 blocks
 	filled := int(rate / 2.0)
 	if filled > 5 {
 		filled = 5
@@ -511,8 +494,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
 		lines = append(lines, subtleStyle.Render("No battery"))
 	} else {
 		b := batts[0]
-		// Line 1: label + bar + percentage (consistent with other cards)
-		// Only show red when battery is critically low
 		statusLower := strings.ToLower(b.Status)
 		percentText := fmt.Sprintf("%5.1f%%", b.Percent)
 		if b.Percent < 20 && statusLower != "charging" && statusLower != "charged" {
@@ -520,7 +501,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
 		}
 		lines = append(lines, fmt.Sprintf("Level %s %s", batteryProgressBar(b.Percent), percentText))
 
-		// Line 2: status with power info
 		statusIcon := ""
 		statusStyle := subtleStyle
 		if statusLower == "charging" || statusLower == "charged" {
@@ -529,7 +509,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
 		} else if b.Percent < 20 {
 			statusStyle = dangerStyle
 		}
-		// Capitalize first letter
 		statusText := b.Status
 		if len(statusText) > 0 {
 			statusText = strings.ToUpper(statusText[:1]) + strings.ToLower(statusText[1:])
@@ -537,21 +516,18 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
 		if b.TimeLeft != "" {
 			statusText += " · " + b.TimeLeft
 		}
-		// Add power information
+		// Add power info.
 		if statusLower == "charging" || statusLower == "charged" {
-			// AC powered - show system power consumption
 			if thermal.SystemPower > 0 {
 				statusText += fmt.Sprintf(" · %.0fW", thermal.SystemPower)
 			} else if thermal.AdapterPower > 0 {
 				statusText += fmt.Sprintf(" · %.0fW Adapter", thermal.AdapterPower)
 			}
 		} else if thermal.BatteryPower > 0 {
-			// Battery powered - show discharge rate
 			statusText += fmt.Sprintf(" · %.0fW", thermal.BatteryPower)
 		}
 		lines = append(lines, statusStyle.Render(statusText+statusIcon))
 
-		// Line 3: Health + cycles + temp
 		healthParts := []string{}
 		if b.Health != "" {
 			healthParts = append(healthParts, b.Health)
@@ -560,7 +536,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
 			healthParts = append(healthParts, fmt.Sprintf("%d cycles", b.CycleCount))
 		}
 
-		// Add temperature if available
 		if thermal.CPUTemp > 0 {
 			tempStyle := subtleStyle
 			if thermal.CPUTemp > 80 {
@@ -571,7 +546,6 @@ func renderBatteryCard(batts []BatteryStatus, thermal ThermalStatus) cardData {
 			healthParts = append(healthParts, tempStyle.Render(fmt.Sprintf("%.0f°C", thermal.CPUTemp)))
 		}
 
-		// Add fan speed if available
 		if thermal.FanSpeed > 0 {
 			healthParts = append(healthParts, fmt.Sprintf("%d RPM", thermal.FanSpeed))
 		}
@@ -607,7 +581,6 @@ func renderCard(data cardData, width int, height int) string {
 	header := titleStyle.Render(titleText) + " " + lineStyle.Render(strings.Repeat("╌", lineLen))
 	content := header + "\n" + strings.Join(data.lines, "\n")
 
-	// Pad to target height
 	lines := strings.Split(content, "\n")
 	for len(lines) < height {
 		lines = append(lines, "")
@@ -780,7 +753,6 @@ func renderTwoColumns(cards []cardData, width int) string {
 		}
 	}
 
-	// Add empty lines between rows for separation
 	var spacedRows []string
 	for i, r := range rows {
 		if i > 0 {
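The deleted comments described the sweep as 0 -> maxPos -> 0; a sketch of that triangle wave (the modulo step itself lives in the elided lines, so the fold-back is an assumption):

package main

import "fmt"

// molePos folds a growing frame counter onto a 0 -> maxPos -> 0 sweep.
func molePos(animFrame, maxPos int) int {
	cycleLength := maxPos * 2
	if cycleLength == 0 {
		cycleLength = 1
	}
	pos := animFrame % cycleLength
	if pos > maxPos {
		pos = cycleLength - pos
	}
	return pos
}

func main() {
	for f := 0; f <= 8; f++ {
		fmt.Print(molePos(f, 4), " ") // 0 1 2 3 4 3 2 1 0
	}
	fmt.Println()
}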
81  install.sh
@@ -1,16 +1,16 @@
 #!/bin/bash
-# Mole Installation Script
+# Mole - Installer for manual installs.
+# Fetches source/binaries and installs to prefix.
+# Supports update and edge installs.
 
 set -euo pipefail
 
-# Colors
 GREEN='\033[0;32m'
 BLUE='\033[0;34m'
 YELLOW='\033[1;33m'
 RED='\033[0;31m'
 NC='\033[0m'
 
-# Simple spinner
 _SPINNER_PID=""
 start_line_spinner() {
     local msg="$1"
@@ -36,17 +36,15 @@ stop_line_spinner() { if [[ -n "$_SPINNER_PID" ]]; then
     printf "\r\033[K"
 fi; }
 
-# Verbosity (0 = quiet, 1 = verbose)
 VERBOSE=1
 
-# Icons (duplicated from lib/core/common.sh - necessary as install.sh runs standalone)
-# Note: Don't use 'readonly' here to avoid conflicts when sourcing common.sh later
+# Icons duplicated from lib/core/common.sh (install.sh runs standalone).
+# Avoid readonly to prevent conflicts when sourcing common.sh later.
 ICON_SUCCESS="✓"
 ICON_ADMIN="●"
 ICON_CONFIRM="◎"
 ICON_ERROR="☻"
 
-# Logging functions
 log_info() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${BLUE}$1${NC}"; }
 log_success() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${GREEN}${ICON_SUCCESS}${NC} $1"; }
 log_warning() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${YELLOW}$1${NC}"; }
@@ -54,21 +52,18 @@ log_error() { echo -e "${YELLOW}${ICON_ERROR}${NC} $1"; }
 log_admin() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${BLUE}${ICON_ADMIN}${NC} $1"; }
 log_confirm() { [[ ${VERBOSE} -eq 1 ]] && echo -e "${BLUE}${ICON_CONFIRM}${NC} $1"; }
 
-# Default installation directory
+# Install defaults
 INSTALL_DIR="/usr/local/bin"
 CONFIG_DIR="$HOME/.config/mole"
 SOURCE_DIR=""
 
-# Default action (install|update)
 ACTION="install"
 
-# Check if we need sudo for install directory operations
+# Resolve source dir (local checkout, env override, or download).
 needs_sudo() {
     [[ ! -w "$INSTALL_DIR" ]]
 }
 
-# Execute command with sudo if needed
-# Usage: maybe_sudo cp source dest
 maybe_sudo() {
     if needs_sudo; then
         sudo "$@"
@@ -77,13 +72,11 @@ maybe_sudo() {
     fi
 }
 
-# Resolve the directory containing source files (supports curl | bash)
 resolve_source_dir() {
     if [[ -n "$SOURCE_DIR" && -d "$SOURCE_DIR" && -f "$SOURCE_DIR/mole" ]]; then
         return 0
     fi
 
-    # 1) If script is on disk, use its directory (only when mole executable present)
     if [[ -n "${BASH_SOURCE[0]:-}" && -f "${BASH_SOURCE[0]}" ]]; then
         local script_dir
         script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
@@ -93,16 +86,13 @@ resolve_source_dir() {
         fi
     fi
 
-    # 2) If CLEAN_SOURCE_DIR env is provided, honor it
     if [[ -n "${CLEAN_SOURCE_DIR:-}" && -d "$CLEAN_SOURCE_DIR" && -f "$CLEAN_SOURCE_DIR/mole" ]]; then
         SOURCE_DIR="$CLEAN_SOURCE_DIR"
         return 0
     fi
 
-    # 3) Fallback: fetch repository to a temp directory (works for curl | bash)
     local tmp
     tmp="$(mktemp -d)"
-    # Expand tmp now so trap doesn't depend on local scope
     trap "stop_line_spinner 2>/dev/null; rm -rf '$tmp'" EXIT
 
     local branch="${MOLE_VERSION:-}"
@@ -120,7 +110,6 @@ resolve_source_dir() {
     fi
     local url="https://github.com/tw93/mole/archive/refs/heads/main.tar.gz"
 
-    # If a specific version is requested (e.g. V1.0.0), use the tag URL
     if [[ "$branch" != "main" ]]; then
         url="https://github.com/tw93/mole/archive/refs/tags/${branch}.tar.gz"
     fi
@@ -131,8 +120,6 @@ resolve_source_dir() {
     if tar -xzf "$tmp/mole.tar.gz" -C "$tmp" 2> /dev/null; then
         stop_line_spinner
 
-        # Find the extracted directory (name varies by tag/branch)
-        # It usually looks like Mole-main, mole-main, Mole-1.0.0, etc.
         local extracted_dir
         extracted_dir=$(find "$tmp" -mindepth 1 -maxdepth 1 -type d | head -n 1)
 
@@ -170,6 +157,7 @@ resolve_source_dir() {
     exit 1
 }
 
+# Version helpers
 get_source_version() {
     local source_mole="$SOURCE_DIR/mole"
     if [[ -f "$source_mole" ]]; then
@@ -188,7 +176,6 @@ get_latest_release_tag() {
     if [[ -z "$tag" ]]; then
         return 1
     fi
-    # Return tag as-is; normalize_release_tag will handle standardization
     printf '%s\n' "$tag"
 }
 
@@ -205,7 +192,6 @@ get_latest_release_tag_from_git() {
 
 normalize_release_tag() {
     local tag="$1"
-    # Remove all leading 'v' or 'V' prefixes (handle edge cases like VV1.0.0)
     while [[ "$tag" =~ ^[vV] ]]; do
         tag="${tag#v}"
         tag="${tag#V}"
@@ -218,21 +204,18 @@ normalize_release_tag() {
 get_installed_version() {
     local binary="$INSTALL_DIR/mole"
     if [[ -x "$binary" ]]; then
-        # Try running the binary first (preferred method)
        local version
        version=$("$binary" --version 2> /dev/null | awk '/Mole version/ {print $NF; exit}')
        if [[ -n "$version" ]]; then
            echo "$version"
        else
-            # Fallback: parse VERSION from file (in case binary is broken)
            sed -n 's/^VERSION="\(.*\)"$/\1/p' "$binary" | head -n1
        fi
    fi
 }
 
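For reference, the normalize_release_tag loop kept above maps VV1.0.0, v1.0.0, and V1.0.0 alike to 1.0.0 — each pass strips one leading v or V, so the stacked prefixes from the removed comment's edge case also normalize cleanly:

    normalize_release_tag "VV1.0.0"   # -> 1.0.0
    normalize_release_tag "v1.16.0"   # -> 1.16.0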
-# Parse command line arguments
+# CLI parsing (supports main/latest and version tokens).
 parse_args() {
-    # Handle positional version selector in any position
     local -a args=("$@")
     local version_token=""
     local i
@@ -248,14 +231,12 @@ parse_args() {
         fi
         case "$token" in
             latest | main)
-                # Install from main branch (edge/beta)
                 export MOLE_VERSION="main"
                 export MOLE_EDGE_INSTALL="true"
                 version_token="$token"
                 unset 'args[$i]'
                 ;;
             [0-9]* | V[0-9]* | v[0-9]*)
-                # Install specific version (e.g., 1.16.0, V1.16.0)
                 export MOLE_VERSION="$token"
                 version_token="$token"
                 unset 'args[$i]'
@@ -266,7 +247,6 @@ parse_args() {
             ;;
         esac
     done
-    # Use ${args[@]+...} pattern to safely handle sparse/empty arrays with set -u
     if [[ ${#args[@]} -gt 0 ]]; then
         set -- ${args[@]+"${args[@]}"}
     else
@@ -311,17 +291,14 @@ parse_args() {
     done
 }
 
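A sketch of the invocations parse_args accepts, per the case arms above (the no-argument form is the plain install default):

    bash install.sh            # plain install
    bash install.sh main       # edge install from the main branch (MOLE_VERSION=main)
    bash install.sh V1.16.0    # pin a tagged version (MOLE_VERSION=V1.16.0)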
# Check system requirements
|
# Environment checks and directory setup
|
||||||
check_requirements() {
|
check_requirements() {
|
||||||
# Check if running on macOS
|
|
||||||
if [[ "$OSTYPE" != "darwin"* ]]; then
|
if [[ "$OSTYPE" != "darwin"* ]]; then
|
||||||
log_error "This tool is designed for macOS only"
|
log_error "This tool is designed for macOS only"
|
||||||
exit 1
|
exit 1
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Check if already installed via Homebrew
|
|
||||||
if command -v brew > /dev/null 2>&1 && brew list mole > /dev/null 2>&1; then
|
if command -v brew > /dev/null 2>&1 && brew list mole > /dev/null 2>&1; then
|
||||||
# Verify that mole executable actually exists and is from Homebrew
|
|
||||||
local mole_path
|
local mole_path
|
||||||
mole_path=$(command -v mole 2> /dev/null || true)
|
mole_path=$(command -v mole 2> /dev/null || true)
|
||||||
local is_homebrew_binary=false
|
local is_homebrew_binary=false
|
||||||
@@ -332,7 +309,6 @@ check_requirements() {
|
|||||||
fi
|
fi
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Only block installation if Homebrew binary actually exists
|
|
||||||
if [[ "$is_homebrew_binary" == "true" ]]; then
|
if [[ "$is_homebrew_binary" == "true" ]]; then
|
||||||
if [[ "$ACTION" == "update" ]]; then
|
if [[ "$ACTION" == "update" ]]; then
|
||||||
return 0
|
return 0
|
||||||
@@ -346,27 +322,22 @@ check_requirements() {
|
|||||||
echo ""
|
echo ""
|
||||||
exit 1
|
exit 1
|
||||||
else
|
else
|
||||||
# Brew has mole in database but binary doesn't exist - clean up
|
|
||||||
log_warning "Cleaning up stale Homebrew installation..."
|
log_warning "Cleaning up stale Homebrew installation..."
|
||||||
brew uninstall --force mole > /dev/null 2>&1 || true
|
brew uninstall --force mole > /dev/null 2>&1 || true
|
||||||
fi
|
fi
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Check if install directory exists and is writable
|
|
||||||
if [[ ! -d "$(dirname "$INSTALL_DIR")" ]]; then
|
if [[ ! -d "$(dirname "$INSTALL_DIR")" ]]; then
|
||||||
log_error "Parent directory $(dirname "$INSTALL_DIR") does not exist"
|
log_error "Parent directory $(dirname "$INSTALL_DIR") does not exist"
|
||||||
exit 1
|
exit 1
|
||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
# Create installation directories
|
|
||||||
create_directories() {
|
create_directories() {
|
||||||
# Create install directory if it doesn't exist
|
|
||||||
if [[ ! -d "$INSTALL_DIR" ]]; then
|
if [[ ! -d "$INSTALL_DIR" ]]; then
|
||||||
maybe_sudo mkdir -p "$INSTALL_DIR"
|
maybe_sudo mkdir -p "$INSTALL_DIR"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Create config directory
|
|
||||||
if ! mkdir -p "$CONFIG_DIR" "$CONFIG_DIR/bin" "$CONFIG_DIR/lib"; then
|
if ! mkdir -p "$CONFIG_DIR" "$CONFIG_DIR/bin" "$CONFIG_DIR/lib"; then
|
||||||
log_error "Failed to create config directory: $CONFIG_DIR"
|
log_error "Failed to create config directory: $CONFIG_DIR"
|
||||||
exit 1
|
exit 1
|
||||||
@@ -374,7 +345,7 @@ create_directories() {
|
|||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
# Build binary locally from source when download isn't available
|
# Binary install helpers
|
||||||
build_binary_from_source() {
|
build_binary_from_source() {
|
||||||
local binary_name="$1"
|
local binary_name="$1"
|
||||||
local target_path="$2"
|
local target_path="$2"
|
||||||
@@ -418,7 +389,6 @@ build_binary_from_source() {
|
|||||||
return 1
|
return 1
|
||||||
}
|
}
|
||||||
|
|
||||||
# Download binary from release
|
|
||||||
download_binary() {
|
download_binary() {
|
||||||
local binary_name="$1"
|
local binary_name="$1"
|
||||||
local target_path="$CONFIG_DIR/bin/${binary_name}-go"
|
local target_path="$CONFIG_DIR/bin/${binary_name}-go"
|
||||||
@@ -429,8 +399,6 @@ download_binary() {
|
|||||||
arch_suffix="arm64"
|
arch_suffix="arm64"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Try to use local binary first (from build or source)
|
|
||||||
# Check for both standard name and cross-compiled name
|
|
||||||
if [[ -f "$SOURCE_DIR/bin/${binary_name}-go" ]]; then
|
if [[ -f "$SOURCE_DIR/bin/${binary_name}-go" ]]; then
|
||||||
cp "$SOURCE_DIR/bin/${binary_name}-go" "$target_path"
|
cp "$SOURCE_DIR/bin/${binary_name}-go" "$target_path"
|
||||||
chmod +x "$target_path"
|
chmod +x "$target_path"
|
||||||
@@ -443,7 +411,6 @@ download_binary() {
|
|||||||
return 0
|
return 0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Fallback to download
|
|
||||||
local version
|
local version
|
||||||
version=$(get_source_version)
|
version=$(get_source_version)
|
||||||
if [[ -z "$version" ]]; then
|
if [[ -z "$version" ]]; then
|
||||||
@@ -455,9 +422,7 @@ download_binary() {
|
|||||||
fi
|
fi
|
||||||
local url="https://github.com/tw93/mole/releases/download/V${version}/${binary_name}-darwin-${arch_suffix}"
|
local url="https://github.com/tw93/mole/releases/download/V${version}/${binary_name}-darwin-${arch_suffix}"
|
||||||
|
|
||||||
# Only attempt download if we have internet
|
# Skip preflight network checks to avoid false negatives.
|
||||||
# Note: Skip network check and let curl download handle connectivity issues
|
|
||||||
# This avoids false negatives from strict 2-second timeout
|
|
||||||
|
|
||||||
if [[ -t 1 ]]; then
|
if [[ -t 1 ]]; then
|
||||||
start_line_spinner "Downloading ${binary_name}..."
|
start_line_spinner "Downloading ${binary_name}..."
|
||||||
@@ -480,7 +445,7 @@ download_binary() {
|
|||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
-# Install files
+# File installation (bin/lib/scripts + go helpers).
 install_files() {

 resolve_source_dir
@@ -492,7 +457,6 @@ install_files() {
 install_dir_abs="$(cd "$INSTALL_DIR" && pwd)"
 config_dir_abs="$(cd "$CONFIG_DIR" && pwd)"

-# Copy main executable when destination differs
 if [[ -f "$SOURCE_DIR/mole" ]]; then
 if [[ "$source_dir_abs" != "$install_dir_abs" ]]; then
 if needs_sudo; then
@@ -507,7 +471,6 @@ install_files() {
 exit 1
 fi

-# Install mo alias for Mole if available
 if [[ -f "$SOURCE_DIR/mo" ]]; then
 if [[ "$source_dir_abs" == "$install_dir_abs" ]]; then
 log_success "mo alias already present"
@@ -518,7 +481,6 @@ install_files() {
 fi
 fi

-# Copy configuration and modules
 if [[ -d "$SOURCE_DIR/bin" ]]; then
 local source_bin_abs="$(cd "$SOURCE_DIR/bin" && pwd)"
 local config_bin_abs="$(cd "$CONFIG_DIR/bin" && pwd)"
@@ -550,7 +512,6 @@ install_files() {
 fi
 fi

-# Copy other files if they exist and directories differ
 if [[ "$config_dir_abs" != "$source_dir_abs" ]]; then
 for file in README.md LICENSE install.sh; do
 if [[ -f "$SOURCE_DIR/$file" ]]; then
@@ -563,12 +524,10 @@ install_files() {
 chmod +x "$CONFIG_DIR/install.sh"
 fi

-# Update the mole script to use the config directory when installed elsewhere
 if [[ "$source_dir_abs" != "$install_dir_abs" ]]; then
 maybe_sudo sed -i '' "s|SCRIPT_DIR=.*|SCRIPT_DIR=\"$CONFIG_DIR\"|" "$INSTALL_DIR/mole"
 fi

-# Install/Download Go binaries
 if ! download_binary "analyze"; then
 exit 1
 fi
@@ -577,12 +536,11 @@ install_files() {
 fi
 }

-# Verify installation
+# Verification and PATH hint
 verify_installation() {

-# Test if mole command works
 if "$INSTALL_DIR/mole" --help > /dev/null 2>&1; then
 return 0
 else
@@ -594,14 +552,11 @@ verify_installation() {
 fi
 }

-# Add to PATH if needed
 setup_path() {
-# Check if install directory is in PATH
 if [[ ":$PATH:" == *":$INSTALL_DIR:"* ]]; then
 return
 fi

-# Only suggest PATH setup for custom directories
 if [[ "$INSTALL_DIR" != "/usr/local/bin" ]]; then
 log_warning "$INSTALL_DIR is not in your PATH"
 echo ""
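Note: the `setup_path` check above relies on a classic Bash idiom: padding both `$PATH` and the candidate directory with colons turns a substring test into a whole-entry test, so `/usr/local/bin` cannot falsely match `/usr/local/bin2`. The hunk only warns for custom directories; it does not edit shell config. Standalone form:

```bash
# Sketch of the colon-wrapping PATH-membership idiom used by setup_path.
dir="/usr/local/bin"
if [[ ":$PATH:" == *":$dir:"* ]]; then
    echo "$dir already in PATH"
else
    echo "$dir missing from PATH"   # the installer warns; it does not rewrite rc files
fi
```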
@@ -659,7 +614,7 @@ print_usage_summary() {
 echo ""
 }

-# Main installation function
+# Main install/update flows
 perform_install() {
 resolve_source_dir
 local source_version
@@ -678,7 +633,7 @@ perform_install() {
 installed_version="$source_version"
 fi

-# Add edge indicator for main branch installs
+# Edge installs get a suffix to make the version explicit.
 if [[ "${MOLE_EDGE_INSTALL:-}" == "true" ]]; then
 installed_version="${installed_version}-edge"
 echo ""
@@ -693,7 +648,6 @@ perform_update() {
 check_requirements

 if command -v brew > /dev/null 2>&1 && brew list mole > /dev/null 2>&1; then
-# Try to use shared function if available (when running from installed Mole)
 resolve_source_dir 2> /dev/null || true
 local current_version
 current_version=$(get_installed_version || echo "unknown")
@@ -702,7 +656,6 @@ perform_update() {
 source "$SOURCE_DIR/lib/core/common.sh"
 update_via_homebrew "$current_version"
 else
-# No common.sh available - provide helpful instructions
 log_error "Cannot update Homebrew-managed Mole without full installation"
 echo ""
 echo "Please update via Homebrew:"
@@ -735,7 +688,6 @@ perform_update() {
 exit 0
 fi

-# Update with minimal output (suppress info/success, show errors only)
 local old_verbose=$VERBOSE
 VERBOSE=0
 create_directories || {
@@ -766,7 +718,6 @@ perform_update() {
 echo -e "${GREEN}${ICON_SUCCESS}${NC} Updated to latest version ($updated_version)"
 }

-# Run requested action
 parse_args "$@"

 case "$ACTION" in
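Note: `perform_update` above branches on whether Mole is Homebrew-managed before attempting a self-update; the detection itself is just two probes taken straight from the hunk. A minimal standalone version:

```bash
# Sketch: detect a Homebrew-managed install before self-updating.
if command -v brew > /dev/null 2>&1 && brew list mole > /dev/null 2>&1; then
    echo "mole is managed by Homebrew; update with: brew upgrade mole"
else
    echo "mole was installed manually; safe to self-update"
fi
```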
@@ -1,22 +1,19 @@
 #!/bin/bash
-# User GUI Applications Cleanup Module
-# Desktop applications, communication tools, media players, games, utilities
+# User GUI Applications Cleanup Module (desktop apps, media, utilities).
 set -euo pipefail
-# Clean Xcode and iOS development tools
+# Xcode and iOS tooling.
 clean_xcode_tools() {
-# Check if Xcode is running for safer cleanup of critical resources
+# Skip DerivedData/Archives while Xcode is running.
 local xcode_running=false
 if pgrep -x "Xcode" > /dev/null 2>&1; then
 xcode_running=true
 fi
-# Safe to clean regardless of Xcode state
 safe_clean ~/Library/Developer/CoreSimulator/Caches/* "Simulator cache"
 safe_clean ~/Library/Developer/CoreSimulator/Devices/*/data/tmp/* "Simulator temp files"
 safe_clean ~/Library/Caches/com.apple.dt.Xcode/* "Xcode cache"
 safe_clean ~/Library/Developer/Xcode/iOS\ Device\ Logs/* "iOS device logs"
 safe_clean ~/Library/Developer/Xcode/watchOS\ Device\ Logs/* "watchOS device logs"
 safe_clean ~/Library/Developer/Xcode/Products/* "Xcode build products"
-# Clean build artifacts only if Xcode is not running
 if [[ "$xcode_running" == "false" ]]; then
 safe_clean ~/Library/Developer/Xcode/DerivedData/* "Xcode derived data"
 safe_clean ~/Library/Developer/Xcode/Archives/* "Xcode archives"
@@ -24,7 +21,7 @@ clean_xcode_tools() {
 echo -e " ${YELLOW}${ICON_WARNING}${NC} Xcode is running, skipping DerivedData and Archives cleanup"
 fi
 }
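Note: the Xcode guard above is a reusable safety pattern: probe for the running app with `pgrep -x` (exact process-name match) and skip only the destructive targets while it is open. A generic sketch with the app name as a placeholder:

```bash
# Sketch: skip risky cleanup targets while the owning app is running.
app="Xcode"                       # exact process name, as pgrep -x requires
if pgrep -x "$app" > /dev/null 2>&1; then
    echo "$app is running; skipping build-artifact cleanup"
else
    echo "$app is not running; safe to clean DerivedData/Archives"
fi
```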
-# Clean code editors (VS Code, Sublime, etc.)
+# Code editors.
 clean_code_editors() {
 safe_clean ~/Library/Application\ Support/Code/logs/* "VS Code logs"
 safe_clean ~/Library/Application\ Support/Code/Cache/* "VS Code cache"
@@ -32,7 +29,7 @@ clean_code_editors() {
 safe_clean ~/Library/Application\ Support/Code/CachedData/* "VS Code data cache"
 safe_clean ~/Library/Caches/com.sublimetext.*/* "Sublime Text cache"
 }
-# Clean communication apps (Slack, Discord, Zoom, etc.)
+# Communication apps.
 clean_communication_apps() {
 safe_clean ~/Library/Application\ Support/discord/Cache/* "Discord cache"
 safe_clean ~/Library/Application\ Support/legcord/Cache/* "Legcord cache"
@@ -47,43 +44,43 @@ clean_communication_apps() {
 safe_clean ~/Library/Caches/com.tencent.WeWorkMac/* "WeCom cache"
 safe_clean ~/Library/Caches/com.feishu.*/* "Feishu cache"
 }
-# Clean DingTalk
+# DingTalk.
 clean_dingtalk() {
 safe_clean ~/Library/Caches/dd.work.exclusive4aliding/* "DingTalk iDingTalk cache"
 safe_clean ~/Library/Caches/com.alibaba.AliLang.osx/* "AliLang security component"
 safe_clean ~/Library/Application\ Support/iDingTalk/log/* "DingTalk logs"
 safe_clean ~/Library/Application\ Support/iDingTalk/holmeslogs/* "DingTalk holmes logs"
 }
-# Clean AI assistants
+# AI assistants.
 clean_ai_apps() {
 safe_clean ~/Library/Caches/com.openai.chat/* "ChatGPT cache"
 safe_clean ~/Library/Caches/com.anthropic.claudefordesktop/* "Claude desktop cache"
 safe_clean ~/Library/Logs/Claude/* "Claude logs"
 }
-# Clean design and creative tools
+# Design and creative tools.
 clean_design_tools() {
 safe_clean ~/Library/Caches/com.bohemiancoding.sketch3/* "Sketch cache"
 safe_clean ~/Library/Application\ Support/com.bohemiancoding.sketch3/cache/* "Sketch app cache"
 safe_clean ~/Library/Caches/Adobe/* "Adobe cache"
 safe_clean ~/Library/Caches/com.adobe.*/* "Adobe app caches"
 safe_clean ~/Library/Caches/com.figma.Desktop/* "Figma cache"
-# Note: Raycast cache is protected - contains clipboard history (including images)
+# Raycast cache is protected (clipboard history, images).
 }
-# Clean video editing tools
+# Video editing tools.
 clean_video_tools() {
 safe_clean ~/Library/Caches/net.telestream.screenflow10/* "ScreenFlow cache"
 safe_clean ~/Library/Caches/com.apple.FinalCut/* "Final Cut Pro cache"
 safe_clean ~/Library/Caches/com.blackmagic-design.DaVinciResolve/* "DaVinci Resolve cache"
 safe_clean ~/Library/Caches/com.adobe.PremierePro.*/* "Premiere Pro cache"
 }
-# Clean 3D and CAD tools
+# 3D and CAD tools.
 clean_3d_tools() {
 safe_clean ~/Library/Caches/org.blenderfoundation.blender/* "Blender cache"
 safe_clean ~/Library/Caches/com.maxon.cinema4d/* "Cinema 4D cache"
 safe_clean ~/Library/Caches/com.autodesk.*/* "Autodesk cache"
 safe_clean ~/Library/Caches/com.sketchup.*/* "SketchUp cache"
 }
-# Clean productivity apps
+# Productivity apps.
 clean_productivity_apps() {
 safe_clean ~/Library/Caches/com.tw93.MiaoYan/* "MiaoYan cache"
 safe_clean ~/Library/Caches/com.klee.desktop/* "Klee cache"
@@ -92,20 +89,18 @@ clean_productivity_apps() {
 safe_clean ~/Library/Caches/com.filo.client/* "Filo cache"
 safe_clean ~/Library/Caches/com.flomoapp.mac/* "Flomo cache"
 }
-# Clean music and media players (protects Spotify offline music)
+# Music/media players (protect Spotify offline music).
 clean_media_players() {
-# Spotify cache protection: check for offline music indicators
 local spotify_cache="$HOME/Library/Caches/com.spotify.client"
 local spotify_data="$HOME/Library/Application Support/Spotify"
 local has_offline_music=false
-# Check for offline music database or large cache (>500MB)
+# Heuristics: offline DB or large cache.
 if [[ -f "$spotify_data/PersistentCache/Storage/offline.bnk" ]] ||
 [[ -d "$spotify_data/PersistentCache/Storage" && -n "$(find "$spotify_data/PersistentCache/Storage" -type f -name "*.file" 2> /dev/null | head -1)" ]]; then
 has_offline_music=true
 elif [[ -d "$spotify_cache" ]]; then
 local cache_size_kb
 cache_size_kb=$(get_path_size_kb "$spotify_cache")
-# Large cache (>500MB) likely contains offline music
 if [[ $cache_size_kb -ge 512000 ]]; then
 has_offline_music=true
 fi
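Note: the 500 MB threshold above is expressed in kilobytes (500 x 1024 = 512000 KB). A condensed sketch of the heuristic; it substitutes plain `du -sk` for the module's `get_path_size_kb` helper so it runs standalone:

```bash
# Sketch: protect Spotify's cache when an offline database exists or the
# cache is large enough to suggest downloaded music.
spotify_cache="$HOME/Library/Caches/com.spotify.client"
spotify_data="$HOME/Library/Application Support/Spotify"
has_offline_music=false
if [[ -f "$spotify_data/PersistentCache/Storage/offline.bnk" ]]; then
    has_offline_music=true
elif [[ -d "$spotify_cache" ]]; then
    size_kb=$(du -sk "$spotify_cache" 2> /dev/null | awk '{print $1}')
    [[ ${size_kb:-0} -ge 512000 ]] && has_offline_music=true   # 512000 KB = 500 MB
fi
echo "protect Spotify cache: $has_offline_music"
```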
@@ -125,7 +120,7 @@ clean_media_players() {
 safe_clean ~/Library/Caches/com.kugou.mac/* "Kugou Music cache"
 safe_clean ~/Library/Caches/com.kuwo.mac/* "Kuwo Music cache"
 }
-# Clean video players
+# Video players.
 clean_video_players() {
 safe_clean ~/Library/Caches/com.colliderli.iina "IINA cache"
 safe_clean ~/Library/Caches/org.videolan.vlc "VLC cache"
@@ -136,7 +131,7 @@ clean_video_players() {
 safe_clean ~/Library/Caches/com.douyu.*/* "Douyu cache"
 safe_clean ~/Library/Caches/com.huya.*/* "Huya cache"
 }
-# Clean download managers
+# Download managers.
 clean_download_managers() {
 safe_clean ~/Library/Caches/net.xmac.aria2gui "Aria2 cache"
 safe_clean ~/Library/Caches/org.m0k.transmission "Transmission cache"
@@ -145,7 +140,7 @@ clean_download_managers() {
 safe_clean ~/Library/Caches/com.folx.*/* "Folx cache"
 safe_clean ~/Library/Caches/com.charlessoft.pacifist/* "Pacifist cache"
 }
-# Clean gaming platforms
+# Gaming platforms.
 clean_gaming_platforms() {
 safe_clean ~/Library/Caches/com.valvesoftware.steam/* "Steam cache"
 safe_clean ~/Library/Application\ Support/Steam/htmlcache/* "Steam web cache"
@@ -156,41 +151,41 @@ clean_gaming_platforms() {
 safe_clean ~/Library/Caches/com.gog.galaxy/* "GOG Galaxy cache"
 safe_clean ~/Library/Caches/com.riotgames.*/* "Riot Games cache"
 }
-# Clean translation and dictionary apps
+# Translation/dictionary apps.
 clean_translation_apps() {
 safe_clean ~/Library/Caches/com.youdao.YoudaoDict "Youdao Dictionary cache"
 safe_clean ~/Library/Caches/com.eudic.* "Eudict cache"
 safe_clean ~/Library/Caches/com.bob-build.Bob "Bob Translation cache"
 }
-# Clean screenshot and screen recording tools
+# Screenshot/recording tools.
 clean_screenshot_tools() {
 safe_clean ~/Library/Caches/com.cleanshot.* "CleanShot cache"
 safe_clean ~/Library/Caches/com.reincubate.camo "Camo cache"
 safe_clean ~/Library/Caches/com.xnipapp.xnip "Xnip cache"
 }
-# Clean email clients
+# Email clients.
 clean_email_clients() {
 safe_clean ~/Library/Caches/com.readdle.smartemail-Mac "Spark cache"
 safe_clean ~/Library/Caches/com.airmail.* "Airmail cache"
 }
-# Clean task management apps
+# Task management apps.
 clean_task_apps() {
 safe_clean ~/Library/Caches/com.todoist.mac.Todoist "Todoist cache"
 safe_clean ~/Library/Caches/com.any.do.* "Any.do cache"
 }
-# Clean shell and terminal utilities
+# Shell/terminal utilities.
 clean_shell_utils() {
 safe_clean ~/.zcompdump* "Zsh completion cache"
 safe_clean ~/.lesshst "less history"
 safe_clean ~/.viminfo.tmp "Vim temporary files"
 safe_clean ~/.wget-hsts "wget HSTS cache"
 }
-# Clean input method and system utilities
+# Input methods and system utilities.
 clean_system_utils() {
 safe_clean ~/Library/Caches/com.runjuu.Input-Source-Pro/* "Input Source Pro cache"
 safe_clean ~/Library/Caches/macos-wakatime.WakaTime/* "WakaTime cache"
 }
-# Clean note-taking apps
+# Note-taking apps.
 clean_note_apps() {
 safe_clean ~/Library/Caches/notion.id/* "Notion cache"
 safe_clean ~/Library/Caches/md.obsidian/* "Obsidian cache"
@@ -199,19 +194,19 @@ clean_note_apps() {
 safe_clean ~/Library/Caches/com.evernote.*/* "Evernote cache"
 safe_clean ~/Library/Caches/com.yinxiang.*/* "Yinxiang Note cache"
 }
-# Clean launcher and automation tools
+# Launchers and automation tools.
 clean_launcher_apps() {
 safe_clean ~/Library/Caches/com.runningwithcrayons.Alfred/* "Alfred cache"
 safe_clean ~/Library/Caches/cx.c3.theunarchiver/* "The Unarchiver cache"
 }
-# Clean remote desktop tools
+# Remote desktop tools.
 clean_remote_desktop() {
 safe_clean ~/Library/Caches/com.teamviewer.*/* "TeamViewer cache"
 safe_clean ~/Library/Caches/com.anydesk.*/* "AnyDesk cache"
 safe_clean ~/Library/Caches/com.todesk.*/* "ToDesk cache"
 safe_clean ~/Library/Caches/com.sunlogin.*/* "Sunlogin cache"
 }
-# Main function to clean all user GUI applications
+# Main entry for GUI app cleanup.
 clean_user_gui_applications() {
 stop_section_spinner
 clean_xcode_tools
@@ -2,7 +2,6 @@
 # Application Data Cleanup Module
 set -euo pipefail
 # Args: $1=target_dir, $2=label
-# Clean .DS_Store (Finder metadata), home uses maxdepth 5, excludes slow paths, max 500 files
 clean_ds_store_tree() {
 local target="$1"
 local label="$2"
@@ -15,7 +14,6 @@ clean_ds_store_tree() {
 start_inline_spinner "Cleaning Finder metadata..."
 spinner_active="true"
 fi
-# Build exclusion paths for find (skip common slow/large directories)
 local -a exclude_paths=(
 -path "*/Library/Application Support/MobileSync" -prune -o
 -path "*/Library/Developer" -prune -o
@@ -24,13 +22,11 @@ clean_ds_store_tree() {
 -path "*/.git" -prune -o
 -path "*/Library/Caches" -prune -o
 )
-# Build find command to avoid unbound array expansion with set -u
 local -a find_cmd=("command" "find" "$target")
 if [[ "$target" == "$HOME" ]]; then
 find_cmd+=("-maxdepth" "5")
 fi
 find_cmd+=("${exclude_paths[@]}" "-type" "f" "-name" ".DS_Store" "-print0")
-# Find .DS_Store files with exclusions and depth limit
 while IFS= read -r -d '' ds_file; do
 local size
 size=$(get_file_size "$ds_file")
@@ -61,14 +57,11 @@ clean_ds_store_tree() {
 note_activity
 fi
 }
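Note: the `.DS_Store` walk above combines three defensive moves: `-prune` exclusions for slow trees, a depth cap when scanning `$HOME`, and NUL-delimited paths so filenames with spaces survive the read loop. A self-contained sketch of the same shape, printing instead of deleting:

```bash
# Sketch: bounded .DS_Store scan with pruned slow paths and NUL-safe reads.
target="$HOME"
while IFS= read -r -d '' ds_file; do
    echo "would remove: $ds_file"   # the real module deletes via safe_* helpers
done < <(command find "$target" -maxdepth 5 \
    -path "*/Library/Developer" -prune -o \
    -path "*/.git" -prune -o \
    -path "*/Library/Caches" -prune -o \
    -type f -name ".DS_Store" -print0 2> /dev/null)
```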
-# Clean data for uninstalled apps (caches/logs/states older than 60 days)
-# Protects system apps, major vendors, scans /Applications+running processes
-# Max 100 items/pattern, 2s du timeout. Env: ORPHAN_AGE_THRESHOLD, DRY_RUN
+# Orphaned app data (60+ days inactive). Env: ORPHAN_AGE_THRESHOLD, DRY_RUN
 # Usage: scan_installed_apps "output_file"
-# Scan system for installed application bundle IDs
 scan_installed_apps() {
 local installed_bundles="$1"
-# Performance optimization: cache results for 5 minutes
+# Cache installed app scan briefly to speed repeated runs.
 local cache_file="$HOME/.cache/mole/installed_apps_cache"
 local cache_age_seconds=300 # 5 minutes
 if [[ -f "$cache_file" ]]; then
@@ -77,7 +70,6 @@ scan_installed_apps() {
 local age=$((current_time - cache_mtime))
 if [[ $age -lt $cache_age_seconds ]]; then
 debug_log "Using cached app list (age: ${age}s)"
-# Verify cache file is readable and not empty
 if [[ -r "$cache_file" ]] && [[ -s "$cache_file" ]]; then
 if cat "$cache_file" > "$installed_bundles" 2> /dev/null; then
 return 0
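Note: the five-minute cache above is a plain mtime comparison against a TTL. A sketch using macOS `stat -f %m`, which the module wraps in helpers; Bash 3.2 compatible:

```bash
# Sketch: reuse a cached result file while it is younger than a TTL.
cache_file="$HOME/.cache/mole/installed_apps_cache"
ttl_seconds=300
if [[ -f "$cache_file" ]]; then
    mtime=$(stat -f %m "$cache_file" 2> /dev/null || echo 0)
    age=$(( $(date +%s) - mtime ))
    if [[ $age -lt $ttl_seconds && -s "$cache_file" ]]; then
        echo "cache hit (age: ${age}s)"
    else
        echo "cache expired or empty; rescan"
    fi
fi
```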
@@ -90,26 +82,22 @@ scan_installed_apps() {
 fi
 fi
 debug_log "Scanning installed applications (cache expired or missing)"
-# Scan all Applications directories
 local -a app_dirs=(
 "/Applications"
 "/System/Applications"
 "$HOME/Applications"
 )
-# Create a temp dir for parallel results to avoid write contention
+# Temp dir avoids write contention across parallel scans.
 local scan_tmp_dir=$(create_temp_dir)
-# Parallel scan for applications
 local pids=()
 local dir_idx=0
 for app_dir in "${app_dirs[@]}"; do
 [[ -d "$app_dir" ]] || continue
 (
-# Quickly find all .app bundles first
 local -a app_paths=()
 while IFS= read -r app_path; do
 [[ -n "$app_path" ]] && app_paths+=("$app_path")
 done < <(find "$app_dir" -name '*.app' -maxdepth 3 -type d 2> /dev/null)
-# Read bundle IDs with PlistBuddy
 local count=0
 for app_path in "${app_paths[@]:-}"; do
 local plist_path="$app_path/Contents/Info.plist"
@@ -124,7 +112,7 @@ scan_installed_apps() {
 pids+=($!)
 ((dir_idx++))
 done
-# Get running applications and LaunchAgents in parallel
+# Collect running apps and LaunchAgents to avoid false orphan cleanup.
 (
 local running_apps=$(run_with_timeout 5 osascript -e 'tell application "System Events" to get bundle identifier of every application process' 2> /dev/null || echo "")
 echo "$running_apps" | tr ',' '\n' | sed -e 's/^ *//;s/ *$//' -e '/^$/d' > "$scan_tmp_dir/running.txt"
@@ -136,7 +124,6 @@ scan_installed_apps() {
 xargs -I {} basename {} .plist > "$scan_tmp_dir/agents.txt" 2> /dev/null || true
 ) &
 pids+=($!)
-# Wait for all background scans to complete
 debug_log "Waiting for ${#pids[@]} background processes: ${pids[*]}"
 for pid in "${pids[@]}"; do
 wait "$pid" 2> /dev/null || true
@@ -145,37 +132,30 @@ scan_installed_apps() {
 cat "$scan_tmp_dir"/*.txt >> "$installed_bundles" 2> /dev/null || true
 safe_remove "$scan_tmp_dir" true
 sort -u "$installed_bundles" -o "$installed_bundles"
-# Cache the results
 ensure_user_dir "$(dirname "$cache_file")"
 cp "$installed_bundles" "$cache_file" 2> /dev/null || true
 local app_count=$(wc -l < "$installed_bundles" 2> /dev/null | tr -d ' ')
 debug_log "Scanned $app_count unique applications"
 }
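Note: the scan above fans out one subshell per directory, gives each worker a private output file inside a temp dir, then waits on every PID and merges. The per-worker file is what prevents interleaved writes. A stripped-down sketch:

```bash
# Sketch: parallel scans that each write to a private file, then merge.
tmp_dir=$(mktemp -d)
pids=()
i=0
for dir in /Applications "$HOME/Applications"; do
    [[ -d "$dir" ]] || continue
    (
        # Each worker owns scan_$i.txt, so no two writers share a file.
        find "$dir" -name '*.app' -maxdepth 3 -type d 2> /dev/null > "$tmp_dir/scan_$i.txt"
    ) &
    pids+=($!)
    ((i++))
done
for pid in "${pids[@]}"; do wait "$pid" 2> /dev/null || true; done
cat "$tmp_dir"/*.txt 2> /dev/null | sort -u
rm -rf "$tmp_dir"
```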
 # Usage: is_bundle_orphaned "bundle_id" "directory_path" "installed_bundles_file"
-# Check if bundle is orphaned
 is_bundle_orphaned() {
 local bundle_id="$1"
 local directory_path="$2"
 local installed_bundles="$3"
-# Skip system-critical and protected apps
 if should_protect_data "$bundle_id"; then
 return 1
 fi
-# Check if app exists in our scan
 if grep -Fxq "$bundle_id" "$installed_bundles" 2> /dev/null; then
 return 1
 fi
-# Check against centralized protected patterns (app_protection.sh)
 if should_protect_data "$bundle_id"; then
 return 1
 fi
-# Extra check for specific system bundles not covered by patterns
 case "$bundle_id" in
 loginwindow | dock | systempreferences | systemsettings | settings | controlcenter | finder | safari)
 return 1
 ;;
 esac
-# Check file age - only clean if 60+ days inactive
 if [[ -e "$directory_path" ]]; then
 local last_modified_epoch=$(get_file_mtime "$directory_path")
 local current_epoch=$(date +%s)
@@ -186,31 +166,23 @@ is_bundle_orphaned() {
 fi
 return 0
 }
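Note: the final gate in `is_bundle_orphaned` is epoch arithmetic on the path's mtime; only data untouched for the threshold (60 days by default) is eligible. A sketch with macOS `stat` standing in for the module's `get_file_mtime`, and a hypothetical bundle path:

```bash
# Sketch: treat app data as orphan-eligible only after long inactivity.
path="$HOME/Library/Caches/com.example.app"   # hypothetical bundle directory
threshold_days=${ORPHAN_AGE_THRESHOLD:-60}
if [[ -e "$path" ]]; then
    mtime=$(stat -f %m "$path" 2> /dev/null || echo 0)
    age_days=$(( ( $(date +%s) - mtime ) / 86400 ))
    if [[ $age_days -ge $threshold_days ]]; then
        echo "inactive ${age_days}d: candidate for cleanup"
    else
        echo "touched ${age_days}d ago: keep"
    fi
fi
```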
-# Clean data for uninstalled apps (caches/logs/states older than 60 days)
-# Max 100 items/pattern, 2s du timeout. Env: ORPHAN_AGE_THRESHOLD, DRY_RUN
-# Protects system apps, major vendors, scans /Applications+running processes
+# Orphaned app data sweep.
 clean_orphaned_app_data() {
-# Quick permission check - if we can't access Library folders, skip
 if ! ls "$HOME/Library/Caches" > /dev/null 2>&1; then
 stop_section_spinner
 echo -e " ${YELLOW}${ICON_WARNING}${NC} Skipped: No permission to access Library folders"
 return 0
 fi
-# Build list of installed/active apps
 start_section_spinner "Scanning installed apps..."
 local installed_bundles=$(create_temp_file)
 scan_installed_apps "$installed_bundles"
 stop_section_spinner
-# Display scan results
 local app_count=$(wc -l < "$installed_bundles" 2> /dev/null | tr -d ' ')
 echo -e " ${GREEN}${ICON_SUCCESS}${NC} Found $app_count active/installed apps"
-# Track statistics
 local orphaned_count=0
 local total_orphaned_kb=0
-# Unified orphaned resource scanner (caches, logs, states, webkit, HTTP, cookies)
 start_section_spinner "Scanning orphaned app resources..."
-# Define resource types to scan
-# CRITICAL: NEVER add LaunchAgents or LaunchDaemons (breaks login items/startup apps)
+# CRITICAL: NEVER add LaunchAgents or LaunchDaemons (breaks login items/startup apps).
 local -a resource_types=(
 "$HOME/Library/Caches|Caches|com.*:org.*:net.*:io.*"
 "$HOME/Library/Logs|Logs|com.*:org.*:net.*:io.*"
@@ -222,38 +194,29 @@ clean_orphaned_app_data() {
 orphaned_count=0
 for resource_type in "${resource_types[@]}"; do
 IFS='|' read -r base_path label patterns <<< "$resource_type"
-# Check both existence and permission to avoid hanging
 if [[ ! -d "$base_path" ]]; then
 continue
 fi
-# Quick permission check - if we can't ls the directory, skip it
 if ! ls "$base_path" > /dev/null 2>&1; then
 continue
 fi
-# Build file pattern array
 local -a file_patterns=()
 IFS=':' read -ra pattern_arr <<< "$patterns"
 for pat in "${pattern_arr[@]}"; do
 file_patterns+=("$base_path/$pat")
 done
-# Scan and clean orphaned items
 for item_path in "${file_patterns[@]}"; do
-# Use shell glob (no ls needed)
-# Limit iterations to prevent hanging on directories with too many files
 local iteration_count=0
 for match in $item_path; do
 [[ -e "$match" ]] || continue
-# Safety: limit iterations to prevent infinite loops on massive directories
 ((iteration_count++))
 if [[ $iteration_count -gt $MOLE_MAX_ORPHAN_ITERATIONS ]]; then
 break
 fi
-# Extract bundle ID from filename
 local bundle_id=$(basename "$match")
 bundle_id="${bundle_id%.savedState}"
 bundle_id="${bundle_id%.binarycookies}"
 if is_bundle_orphaned "$bundle_id" "$match" "$installed_bundles"; then
-# Use timeout to prevent du from hanging on network mounts or problematic paths
 local size_kb
 size_kb=$(get_path_size_kb "$match")
 if [[ -z "$size_kb" || "$size_kb" == "0" ]]; then
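Note: the sweep above iterates shell globs directly instead of parsing `ls`, and bounds the loop so one enormous directory cannot stall a run. A sketch of the shape; the default of 100 mirrors the removed "max 100 items/pattern" comment and is an assumption here, not a value read from the module:

```bash
# Sketch: glob iteration with a hard cap (MOLE_MAX_ORPHAN_ITERATIONS in the module).
max_iterations=${MOLE_MAX_ORPHAN_ITERATIONS:-100}   # assumed default
count=0
for match in "$HOME/Library/Caches"/com.*; do
    [[ -e "$match" ]] || continue      # an unmatched glob stays literal; skip it
    ((count++))
    if [[ $count -gt $max_iterations ]]; then
        break                          # bound the work on massive directories
    fi
    echo "candidate: $(basename "$match")"
done
```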
@@ -4,13 +4,11 @@
 # Skips if run within 7 days, runs cleanup/autoremove in parallel with 120s timeout
 clean_homebrew() {
 command -v brew > /dev/null 2>&1 || return 0
-# Dry run mode - just indicate what would happen
 if [[ "${DRY_RUN:-false}" == "true" ]]; then
 echo -e " ${YELLOW}${ICON_DRY_RUN}${NC} Homebrew · would cleanup and autoremove"
 return 0
 fi
-# Smart caching: check if brew cleanup was run recently (within 7 days)
-# Extended from 2 days to 7 days to reduce cleanup frequency
+# Skip if cleaned recently to avoid repeated heavy operations.
 local brew_cache_file="${HOME}/.cache/mole/brew_last_cleanup"
 local cache_valid_days=7
 local should_skip=false
@@ -27,20 +25,17 @@ clean_homebrew() {
 fi
 fi
 [[ "$should_skip" == "true" ]] && return 0
-# Quick pre-check: determine if cleanup is needed based on cache size (<50MB)
-# Use timeout to prevent slow du on very large caches
-# If timeout occurs, assume cache is large and run cleanup
+# Skip cleanup if cache is small; still run autoremove.
 local skip_cleanup=false
 local brew_cache_size=0
 if [[ -d ~/Library/Caches/Homebrew ]]; then
 brew_cache_size=$(run_with_timeout 3 du -sk ~/Library/Caches/Homebrew 2> /dev/null | awk '{print $1}')
 local du_exit=$?
-# Skip cleanup (but still run autoremove) if cache is small
 if [[ $du_exit -eq 0 && -n "$brew_cache_size" && "$brew_cache_size" -lt 51200 ]]; then
 skip_cleanup=true
 fi
 fi
-# Display appropriate spinner message
+# Spinner reflects whether cleanup is skipped.
 if [[ -t 1 ]]; then
 if [[ "$skip_cleanup" == "true" ]]; then
 MOLE_SPINNER_PREFIX=" " start_inline_spinner "Homebrew autoremove (cleanup skipped)..."
@@ -48,8 +43,8 @@ clean_homebrew() {
 MOLE_SPINNER_PREFIX=" " start_inline_spinner "Homebrew cleanup and autoremove..."
 fi
 fi
+# Run cleanup/autoremove in parallel with a timeout guard.
 local timeout_seconds=${MO_BREW_TIMEOUT:-120}
-# Run brew cleanup and/or autoremove based on cache size
 local brew_tmp_file autoremove_tmp_file
 local brew_pid autoremove_pid
 if [[ "$skip_cleanup" == "false" ]]; then
@@ -63,9 +58,7 @@ clean_homebrew() {
 local elapsed=0
 local brew_done=false
 local autoremove_done=false
-# Mark cleanup as done if it was skipped
 [[ "$skip_cleanup" == "true" ]] && brew_done=true
-# Wait for both to complete or timeout
 while [[ "$brew_done" == "false" ]] || [[ "$autoremove_done" == "false" ]]; do
 if [[ $elapsed -ge $timeout_seconds ]]; then
 [[ -n "$brew_pid" ]] && kill -TERM $brew_pid 2> /dev/null || true
@@ -77,7 +70,6 @@ clean_homebrew() {
 sleep 1
 ((elapsed++))
 done
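Note: the loop above is the module's generic pattern for racing background work against a wall clock: poll once per second, TERM whatever is still alive at the deadline, then `wait` to collect status. Its skeleton, reduced to a single job:

```bash
# Sketch: run a job in the background and enforce a wall-clock timeout.
long_job() { sleep 5; }                 # stand-in for brew cleanup/autoremove
long_job & job_pid=$!
timeout_seconds=${MO_BREW_TIMEOUT:-120}
elapsed=0
while kill -0 "$job_pid" 2> /dev/null; do
    if [[ $elapsed -ge $timeout_seconds ]]; then
        kill -TERM "$job_pid" 2> /dev/null || true   # deadline reached
        break
    fi
    sleep 1
    ((elapsed++))
done
wait "$job_pid" 2> /dev/null && echo "job finished" || echo "job failed or timed out"
```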
-# Wait for processes to finish
 local brew_success=false
 if [[ "$skip_cleanup" == "false" && -n "$brew_pid" ]]; then
 if wait $brew_pid 2> /dev/null; then
@@ -90,6 +82,7 @@ clean_homebrew() {
 fi
 if [[ -t 1 ]]; then stop_inline_spinner; fi
 # Process cleanup output and extract metrics
+# Summarize cleanup results.
 if [[ "$skip_cleanup" == "true" ]]; then
 # Cleanup was skipped due to small cache size
 local size_mb=$((brew_cache_size / 1024))
@@ -111,6 +104,7 @@ clean_homebrew() {
 echo -e " ${YELLOW}${ICON_WARNING}${NC} Homebrew cleanup timed out · run ${GRAY}brew cleanup${NC} manually"
 fi
 # Process autoremove output - only show if packages were removed
+# Only surface autoremove output when packages were removed.
 if [[ "$autoremove_success" == "true" && -f "$autoremove_tmp_file" ]]; then
 local autoremove_output
 autoremove_output=$(cat "$autoremove_tmp_file" 2> /dev/null || echo "")
@@ -124,6 +118,7 @@ clean_homebrew() {
 fi
 # Update cache timestamp on successful completion or when cleanup was intelligently skipped
 # This prevents repeated cache size checks within the 7-day window
+# Update cache timestamp when any work succeeded or was intentionally skipped.
 if [[ "$skip_cleanup" == "true" ]] || [[ "$brew_success" == "true" ]] || [[ "$autoremove_success" == "true" ]]; then
 ensure_user_file "$brew_cache_file"
 date +%s > "$brew_cache_file"
@@ -1,15 +1,11 @@
 #!/bin/bash
 # Cache Cleanup Module
 set -euo pipefail
-# Only runs once (uses ~/.cache/mole/permissions_granted flag)
-# Trigger all TCC permission dialogs upfront to avoid random interruptions
+# Preflight TCC prompts once to avoid mid-run interruptions.
 check_tcc_permissions() {
-# Only check in interactive mode
 [[ -t 1 ]] || return 0
 local permission_flag="$HOME/.cache/mole/permissions_granted"
-# Skip if permissions were already granted
 [[ -f "$permission_flag" ]] && return 0
-# Key protected directories that require TCC approval
 local -a tcc_dirs=(
 "$HOME/Library/Caches"
 "$HOME/Library/Logs"
@@ -17,8 +13,7 @@ check_tcc_permissions() {
 "$HOME/Library/Containers"
 "$HOME/.cache"
 )
-# Quick permission test - if first directory is accessible, likely others are too
-# Use simple ls test instead of find to avoid triggering permission dialogs prematurely
+# Quick permission probe (avoid deep scans).
 local needs_permission_check=false
 if ! ls "$HOME/Library/Caches" > /dev/null 2>&1; then
 needs_permission_check=true
@@ -32,35 +27,30 @@ check_tcc_permissions() {
 echo -ne "${PURPLE}${ICON_ARROW}${NC} Press ${GREEN}Enter${NC} to continue: "
 read -r
 MOLE_SPINNER_PREFIX="" start_inline_spinner "Requesting permissions..."
-# Trigger all TCC prompts upfront by accessing each directory
-# Using find -maxdepth 1 ensures we touch the directory without deep scanning
+# Touch each directory to trigger prompts without deep scanning.
 for dir in "${tcc_dirs[@]}"; do
 [[ -d "$dir" ]] && command find "$dir" -maxdepth 1 -type d > /dev/null 2>&1
 done
 stop_inline_spinner
 echo ""
 fi
-# Mark permissions as granted (won't prompt again)
+# Mark as granted to avoid repeat prompts.
 ensure_user_file "$permission_flag"
 return 0
 }
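Note: the preflight above deliberately touches each protected directory with a shallow `find` so macOS raises all TCC consent dialogs up front, then records a flag file so later runs skip the ceremony. Minimal form:

```bash
# Sketch: trigger macOS TCC prompts once, then remember that we did.
flag="$HOME/.cache/mole/permissions_granted"
if [[ ! -f "$flag" ]]; then
    for dir in "$HOME/Library/Caches" "$HOME/Library/Logs" "$HOME/Library/Containers"; do
        # -maxdepth 1 touches the directory (raising the prompt) without a deep scan
        [[ -d "$dir" ]] && command find "$dir" -maxdepth 1 -type d > /dev/null 2>&1
    done
    mkdir -p "$(dirname "$flag")" && touch "$flag"   # later runs skip the prompts
fi
```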
 # Args: $1=browser_name, $2=cache_path
-# Clean browser Service Worker cache, protecting web editing tools (capcut, photopea, pixlr)
+# Clean Service Worker cache while protecting critical web editors.
 clean_service_worker_cache() {
 local browser_name="$1"
 local cache_path="$2"
 [[ ! -d "$cache_path" ]] && return 0
 local cleaned_size=0
 local protected_count=0
-# Find all cache directories and calculate sizes with timeout protection
 while IFS= read -r cache_dir; do
 [[ ! -d "$cache_dir" ]] && continue
-# Extract domain from path using regex
-# Pattern matches: letters/numbers, hyphens, then dot, then TLD
-# Example: "abc123_https_example.com_0" → "example.com"
+# Extract a best-effort domain name from cache folder.
 local domain=$(basename "$cache_dir" | grep -oE '[a-zA-Z0-9][-a-zA-Z0-9]*\.[a-zA-Z]{2,}' | head -1 || echo "")
 local size=$(run_with_timeout 5 get_path_size_kb "$cache_dir")
-# Check if domain is protected
 local is_protected=false
 for protected_domain in "${PROTECTED_SW_DOMAINS[@]}"; do
 if [[ "$domain" == *"$protected_domain"* ]]; then
@@ -69,7 +59,6 @@ clean_service_worker_cache() {
 break
 fi
 done
-# Clean if not protected
 if [[ "$is_protected" == "false" ]]; then
 if [[ "$DRY_RUN" != "true" ]]; then
 safe_remove "$cache_dir" true || true
@@ -78,7 +67,6 @@ clean_service_worker_cache() {
 fi
 done < <(run_with_timeout 10 sh -c "find '$cache_path' -type d -depth 2 2> /dev/null || true")
 if [[ $cleaned_size -gt 0 ]]; then
-# Temporarily stop spinner for clean output
 local spinner_was_running=false
 if [[ -t 1 && -n "${INLINE_SPINNER_PID:-}" ]]; then
 stop_inline_spinner
@@ -95,17 +83,15 @@ clean_service_worker_cache() {
 echo -e " ${YELLOW}${ICON_DRY_RUN}${NC} $browser_name Service Worker (would clean ${cleaned_mb}MB, ${protected_count} protected)"
 fi
 note_activity
-# Restart spinner if it was running
 if [[ "$spinner_was_running" == "true" ]]; then
 MOLE_SPINNER_PREFIX=" " start_inline_spinner "Scanning browser Service Worker caches..."
 fi
 fi
 }
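Note: the Service Worker pass derives a domain from each cache folder name, then checks it against a protect-list before deleting. The regex and the substring match are the core; `PROTECTED_SW_DOMAINS` is populated elsewhere in the module, so this sketch supplies a stand-in list and a hypothetical folder name:

```bash
# Sketch: extract a domain from a cache dir name and honor a protect-list.
protected_domains=("photopea.com" "pixlr.com")   # stand-in for PROTECTED_SW_DOMAINS
cache_dir="abc123_https_photopea.com_0"          # hypothetical folder name
domain=$(basename "$cache_dir" | grep -oE '[a-zA-Z0-9][-a-zA-Z0-9]*\.[a-zA-Z]{2,}' | head -1 || echo "")
is_protected=false
for p in "${protected_domains[@]}"; do
    [[ "$domain" == *"$p"* ]] && { is_protected=true; break; }
done
echo "domain=$domain protected=$is_protected"    # prints: domain=photopea.com protected=true
```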
# Next.js/Python project caches with tight scan bounds and timeouts.
# Clean Next.js (.next/cache) and Python (__pycache__) build caches.
# Uses maxdepth 3, excludes Library/.Trash/node_modules, 10s timeout per scan.
clean_project_caches() {
    stop_inline_spinner 2> /dev/null || true
    # Fast pre-check: skip if the user likely has no development projects.
    local has_dev_projects=false
    local -a common_dev_dirs=(
        "$HOME/Code"
@@ -133,8 +119,7 @@ clean_project_caches() {
            break
        fi
    done
    # Fallback: look for project markers near $HOME (node_modules, .git, target, etc.).
    if [[ "$has_dev_projects" == "false" ]]; then
        local -a project_markers=(
            "node_modules"
@@ -153,7 +138,6 @@ clean_project_caches() {
            spinner_active=true
        fi
        for marker in "${project_markers[@]}"; do
            # Quick check with maxdepth 2 and a 3s timeout to avoid slow scans
            if run_with_timeout 3 sh -c "find '$HOME' -maxdepth 2 -name '$marker' -not -path '*/Library/*' -not -path '*/.Trash/*' 2>/dev/null | head -1" | grep -q .; then
                has_dev_projects=true
                break
@@ -162,7 +146,6 @@ clean_project_caches() {
        if [[ "$spinner_active" == "true" ]]; then
            stop_inline_spinner 2> /dev/null || true
        fi
        # Still no dev projects found: skip scanning
        [[ "$has_dev_projects" == "false" ]] && return 0
    fi
    if [[ -t 1 ]]; then
@@ -174,7 +157,7 @@ clean_project_caches() {
    local pycache_tmp_file
    pycache_tmp_file=$(create_temp_file)
    local find_timeout=10
    # Parallel scans (Next.js and __pycache__): start the Next.js search.
    (
        command find "$HOME" -P -mount -type d -name ".next" -maxdepth 3 \
            -not -path "*/Library/*" \
@@ -184,7 +167,6 @@ clean_project_caches() {
            2> /dev/null || true
    ) > "$nextjs_tmp_file" 2>&1 &
    local next_pid=$!
    # Start the Python search.
    (
        command find "$HOME" -P -mount -type d -name "__pycache__" -maxdepth 3 \
            -not -path "*/Library/*" \
@@ -194,7 +176,6 @@ clean_project_caches() {
            2> /dev/null || true
    ) > "$pycache_tmp_file" 2>&1 &
    local py_pid=$!
    # Wait for both with a shared timeout.
    local elapsed=0
    local check_interval=0.2 # Check every 200ms instead of 1s for smoother experience
    while [[ $(echo "$elapsed < $find_timeout" | awk '{print ($1 < $3)}') -eq 1 ]]; do
@@ -204,12 +185,10 @@ clean_project_caches() {
        sleep $check_interval
        elapsed=$(echo "$elapsed + $check_interval" | awk '{print $1 + $3}')
    done
    # Kill stuck scans after the timeout.
    for pid in $next_pid $py_pid; do
        if kill -0 "$pid" 2> /dev/null; then
            # Send TERM first and wait up to 2 seconds for graceful termination
            kill -TERM "$pid" 2> /dev/null || true
            local grace_period=0
            while [[ $grace_period -lt 20 ]]; do
                if ! kill -0 "$pid" 2> /dev/null; then
@@ -218,11 +197,9 @@ clean_project_caches() {
                sleep 0.1
                ((grace_period++))
            done
            # Force kill if still running
            if kill -0 "$pid" 2> /dev/null; then
                kill -KILL "$pid" 2> /dev/null || true
            fi
            # Final wait (should be instant now)
            wait "$pid" 2> /dev/null || true
        else
            wait "$pid" 2> /dev/null || true
@@ -231,11 +208,9 @@ clean_project_caches() {
    if [[ -t 1 ]]; then
        stop_inline_spinner
    fi
    # Process Next.js results
    while IFS= read -r next_dir; do
        [[ -d "$next_dir/cache" ]] && safe_clean "$next_dir/cache"/* "Next.js build cache" || true
    done < "$nextjs_tmp_file"
    # Process Python results
    while IFS= read -r pycache; do
        [[ -d "$pycache" ]] && safe_clean "$pycache"/* "Python bytecode cache" || true
    done < "$pycache_tmp_file"
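clean_project_caches runs both find scans in the background and reaps them with a bounded grace period. A condensed sketch of the same reap pattern under the 10-second budget used above; reap_after is a hypothetical helper name:

# Sketch: wait out a background scan, then TERM, then KILL as a last resort.
reap_after() {
    local timeout="$1" pid="$2" waited=0
    while kill -0 "$pid" 2> /dev/null && [[ $waited -lt $timeout ]]; do
        sleep 1
        waited=$((waited + 1))
    done
    if kill -0 "$pid" 2> /dev/null; then
        kill -TERM "$pid" 2> /dev/null || true
        sleep 2 # grace period before escalating
        kill -0 "$pid" 2> /dev/null && kill -KILL "$pid" 2> /dev/null || true
    fi
    wait "$pid" 2> /dev/null || true # collect exit status, avoid zombies
}
find "$HOME" -maxdepth 3 -type d -name ".next" > /tmp/next_dirs.txt 2> /dev/null &
reap_after 10 $!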

@@ -1,11 +1,7 @@
#!/bin/bash
# Developer Tools Cleanup Module
set -euo pipefail

# Tool cache helper (respects DRY_RUN).
# Args: $1 - description, $@ - command to execute
# Env: DRY_RUN
# Note: try to estimate potential savings; many tool caches don't expose a
# direct path, so we just report the action when a path can't easily be found.
clean_tool_cache() {
    local description="$1"
    shift
@@ -18,50 +14,38 @@ clean_tool_cache() {
    fi
    return 0
}
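Most of clean_tool_cache's body sits in the elided hunk above, so the following is only a sketch of the shape implied by its callers and the DRY_RUN contract; the body is an assumption, not the repository's implementation:

# Sketch: report in dry-run mode, otherwise run the tool's own cache command.
clean_tool_cache_sketch() {
    local description="$1"
    shift
    if [[ "${DRY_RUN:-false}" == "true" ]]; then
        echo -e " ${YELLOW}${ICON_DRY_RUN}${NC} $description (would run: $*)"
        return 0
    fi
    if "$@" > /dev/null 2>&1; then
        echo -e " ${GREEN}${ICON_SUCCESS}${NC} $description"
    fi
    return 0
}
# Usage mirrors the module: clean_tool_cache_sketch "npm cache" npm cache clean --force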
# npm/pnpm/yarn/bun caches.
# Env: DRY_RUN
# npm cache clean clears the official npm cache; safe_clean handles alternative package managers.
clean_dev_npm() {
    if command -v npm > /dev/null 2>&1; then
        # clean_tool_cache calculates size before cleanup for better statistics
        clean_tool_cache "npm cache" npm cache clean --force
        note_activity
    fi
    # Clean pnpm store cache
    local pnpm_default_store=~/Library/pnpm/store
    if command -v pnpm > /dev/null 2>&1; then
        # Use pnpm's built-in prune command
        clean_tool_cache "pnpm cache" pnpm store prune
        # Get the actual store path to check if the default is orphaned
        local pnpm_store_path
        start_section_spinner "Checking store path..."
        pnpm_store_path=$(run_with_timeout 2 pnpm store path 2> /dev/null) || pnpm_store_path=""
        stop_section_spinner
        # If the store path differs from the default, clean the orphaned default
        if [[ -n "$pnpm_store_path" && "$pnpm_store_path" != "$pnpm_default_store" ]]; then
            safe_clean "$pnpm_default_store"/* "Orphaned pnpm store"
        fi
    else
        # pnpm not installed, clean the default location
        safe_clean "$pnpm_default_store"/* "pnpm store"
    fi
    note_activity
    # Clean alternative package manager caches
    safe_clean ~/.tnpm/_cacache/* "tnpm cache directory"
    safe_clean ~/.tnpm/_logs/* "tnpm logs"
    safe_clean ~/.yarn/cache/* "Yarn cache"
    safe_clean ~/.bun/install/cache/* "Bun cache"
}
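The orphaned-store branch above generalizes to any tool that can report its active cache directory. A hedged sketch with a hypothetical helper name; safe_clean is the module's own wrapper:

# Sketch: clean a default cache dir only when the tool's active dir is elsewhere.
clean_orphaned_default() {
    local default_dir="$1" active_dir="$2" label="$3"
    if [[ -n "$active_dir" && "$active_dir" != "$default_dir" && -d "$default_dir" ]]; then
        safe_clean "$default_dir"/* "$label"
    fi
}
clean_orphaned_default ~/Library/pnpm/store "$(pnpm store path 2> /dev/null || true)" "Orphaned pnpm store"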
# Python/pip ecosystem caches.
# Env: DRY_RUN
# pip cache purge clears the official pip cache; safe_clean handles other Python tools.
clean_dev_python() {
    if command -v pip3 > /dev/null 2>&1; then
        clean_tool_cache "pip cache" bash -c 'pip3 cache purge >/dev/null 2>&1 || true'
        note_activity
    fi
    # Clean Python ecosystem caches
    safe_clean ~/.pyenv/cache/* "pyenv cache"
    safe_clean ~/.cache/poetry/* "Poetry cache"
    safe_clean ~/.cache/uv/* "uv cache"
@@ -76,28 +60,23 @@ clean_dev_python() {
    safe_clean ~/anaconda3/pkgs/* "Anaconda packages cache"
    safe_clean ~/.cache/wandb/* "Weights & Biases cache"
}

# Go build/module caches.
# Env: DRY_RUN
# go clean handles build and module caches comprehensively.
clean_dev_go() {
    if command -v go > /dev/null 2>&1; then
        clean_tool_cache "Go cache" bash -c 'go clean -modcache >/dev/null 2>&1 || true; go clean -cache >/dev/null 2>&1 || true'
        note_activity
    fi
}

# Rust/cargo caches.
clean_dev_rust() {
    safe_clean ~/.cargo/registry/cache/* "Rust cargo cache"
    safe_clean ~/.cargo/git/* "Cargo git cache"
    safe_clean ~/.rustup/downloads/* "Rust downloads cache"
}

# Docker caches (guarded by daemon check).
# Env: DRY_RUN
clean_dev_docker() {
    if command -v docker > /dev/null 2>&1; then
        if [[ "$DRY_RUN" != "true" ]]; then
            # Check if the Docker daemon is running (with timeout to prevent hanging)
            start_section_spinner "Checking Docker daemon..."
            local docker_running=false
            if run_with_timeout 3 docker info > /dev/null 2>&1; then
@@ -107,7 +86,6 @@ clean_dev_docker() {
            if [[ "$docker_running" == "true" ]]; then
                clean_tool_cache "Docker build cache" docker builder prune -af
            else
                # Docker not running - silently skip without user interaction
                debug_log "Docker daemon not running, skipping Docker cache cleanup"
            fi
        else
@@ -117,8 +95,7 @@ clean_dev_docker() {
        fi
    fi
    safe_clean ~/.docker/buildx/cache/* "Docker BuildX cache"
}
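The daemon probe above is what keeps clean_dev_docker from hanging on a stopped Docker Desktop. The same guard, condensed; run_with_timeout and the 3-second budget come from the code above, and the function name is illustrative:

# Sketch: probe the daemon with a hard timeout so cleanup never blocks.
docker_daemon_up() {
    command -v docker > /dev/null 2>&1 || return 1
    run_with_timeout 3 docker info > /dev/null 2>&1
}
if docker_daemon_up; then
    docker builder prune -af > /dev/null 2>&1 || true
fi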
# Nix garbage collection.
# Env: DRY_RUN
clean_dev_nix() {
    if command -v nix-collect-garbage > /dev/null 2>&1; then
        if [[ "$DRY_RUN" != "true" ]]; then
@@ -129,7 +106,7 @@ clean_dev_nix() {
            note_activity
        fi
    fi
}

# Cloud CLI caches.
clean_dev_cloud() {
    safe_clean ~/.kube/cache/* "Kubernetes cache"
    safe_clean ~/.local/share/containers/storage/tmp/* "Container storage temp"
@@ -137,7 +114,7 @@ clean_dev_cloud() {
    safe_clean ~/.config/gcloud/logs/* "Google Cloud logs"
    safe_clean ~/.azure/logs/* "Azure CLI logs"
}

# Frontend build caches.
clean_dev_frontend() {
    safe_clean ~/.cache/typescript/* "TypeScript cache"
    safe_clean ~/.cache/electron/* "Electron cache"
@@ -151,40 +128,29 @@ clean_dev_frontend() {
    safe_clean ~/.cache/eslint/* "ESLint cache"
    safe_clean ~/.cache/prettier/* "Prettier cache"
}

# Mobile dev caches (can be large).
# Simulator runtime caches grow over time, and DeviceSupport files accumulate
# for each iOS version ever connected.
clean_dev_mobile() {
    # Remove Xcode simulators whose runtimes are no longer installed;
    # this can free significant space (70GB+ in some cases).
    if command -v xcrun > /dev/null 2>&1; then
        debug_log "Checking for unavailable Xcode simulators"
        if [[ "$DRY_RUN" == "true" ]]; then
            clean_tool_cache "Xcode unavailable simulators" xcrun simctl delete unavailable
        else
            start_section_spinner "Checking unavailable simulators..."
            # Run the command manually to control UI output order
            if xcrun simctl delete unavailable > /dev/null 2>&1; then
                stop_section_spinner
                echo -e " ${GREEN}${ICON_SUCCESS}${NC} Xcode unavailable simulators"
            else
                stop_section_spinner
                # Fail silently, matching clean_tool_cache behavior
            fi
        fi
        note_activity
    fi
    # DeviceSupport caches/logs (preserve core support files).
    # DeviceSupport directories store debug symbols for each iOS version;
    # caches and logs are safe to clean, the support files themselves are not.
    safe_clean ~/Library/Developer/Xcode/iOS\ DeviceSupport/*/Symbols/System/Library/Caches/* "iOS device symbol cache"
    safe_clean ~/Library/Developer/Xcode/iOS\ DeviceSupport/*.log "iOS device support logs"
    safe_clean ~/Library/Developer/Xcode/watchOS\ DeviceSupport/*/Symbols/System/Library/Caches/* "watchOS device symbol cache"
    safe_clean ~/Library/Developer/Xcode/tvOS\ DeviceSupport/*/Symbols/System/Library/Caches/* "tvOS device symbol cache"
    # Simulator runtime caches (RuntimeRoot accumulates system library caches).
    safe_clean ~/Library/Developer/CoreSimulator/Profiles/Runtimes/*/Contents/Resources/RuntimeRoot/System/Library/Caches/* "Simulator runtime cache"
    safe_clean ~/Library/Caches/Google/AndroidStudio*/* "Android Studio cache"
    safe_clean ~/Library/Caches/CocoaPods/* "CocoaPods cache"
@@ -194,14 +160,14 @@ clean_dev_mobile() {
    safe_clean ~/Library/Developer/Xcode/UserData/IB\ Support/* "Xcode Interface Builder cache"
    safe_clean ~/.cache/swift-package-manager/* "Swift package manager cache"
}

# JVM ecosystem caches.
clean_dev_jvm() {
    safe_clean ~/.gradle/caches/* "Gradle caches"
    safe_clean ~/.gradle/daemon/* "Gradle daemon logs"
    safe_clean ~/.sbt/* "SBT cache"
    safe_clean ~/.ivy2/cache/* "Ivy cache"
}

# Other language tool caches.
clean_dev_other_langs() {
    safe_clean ~/.bundle/cache/* "Ruby Bundler cache"
    safe_clean ~/.composer/cache/* "PHP Composer cache"
@@ -211,7 +177,7 @@ clean_dev_other_langs() {
    safe_clean ~/.cache/zig/* "Zig cache"
    safe_clean ~/Library/Caches/deno/* "Deno cache"
}

# CI/CD and DevOps caches.
clean_dev_cicd() {
    safe_clean ~/.cache/terraform/* "Terraform cache"
    safe_clean ~/.grafana/cache/* "Grafana cache"
@@ -222,7 +188,7 @@ clean_dev_cicd() {
    safe_clean ~/.circleci/cache/* "CircleCI cache"
    safe_clean ~/.sonar/* "SonarQube cache"
}

# Database tool caches.
clean_dev_database() {
    safe_clean ~/Library/Caches/com.sequel-ace.sequel-ace/* "Sequel Ace cache"
    safe_clean ~/Library/Caches/com.eggerapps.Sequel-Pro/* "Sequel Pro cache"
@@ -231,7 +197,7 @@ clean_dev_database() {
    safe_clean ~/Library/Caches/com.dbeaver.* "DBeaver cache"
    safe_clean ~/Library/Caches/com.redis.RedisInsight "Redis Insight cache"
}

# API/debugging tool caches.
clean_dev_api_tools() {
    safe_clean ~/Library/Caches/com.postmanlabs.mac/* "Postman cache"
    safe_clean ~/Library/Caches/com.konghq.insomnia/* "Insomnia cache"
@@ -240,7 +206,7 @@ clean_dev_api_tools() {
    safe_clean ~/Library/Caches/com.charlesproxy.charles/* "Charles Proxy cache"
    safe_clean ~/Library/Caches/com.proxyman.NSProxy/* "Proxyman cache"
}

# Misc dev tool caches.
clean_dev_misc() {
    safe_clean ~/Library/Caches/com.unity3d.*/* "Unity cache"
    safe_clean ~/Library/Caches/com.mongodb.compass/* "MongoDB Compass cache"
@@ -250,7 +216,7 @@ clean_dev_misc() {
    safe_clean ~/Library/Caches/KSCrash/* "KSCrash reports"
    safe_clean ~/Library/Caches/com.crashlytics.data/* "Crashlytics data"
}

# Shell and VCS leftovers.
clean_dev_shell() {
    safe_clean ~/.gitconfig.lock "Git config lock"
    safe_clean ~/.gitconfig.bak* "Git config backup"
@@ -260,28 +226,20 @@ clean_dev_shell() {
    safe_clean ~/.zsh_history.bak* "Zsh history backup"
    safe_clean ~/.cache/pre-commit/* "pre-commit cache"
}

# Network tool caches.
clean_dev_network() {
    safe_clean ~/.cache/curl/* "curl cache"
    safe_clean ~/.cache/wget/* "wget cache"
    safe_clean ~/Library/Caches/curl/* "macOS curl cache"
    safe_clean ~/Library/Caches/wget/* "macOS wget cache"
}

# Orphaned SQLite temp files (-shm/-wal). Disabled due to low ROI.
# Env: DRY_RUN
clean_sqlite_temp_files() {
    # Skipped: the find scan is slow even when optimized, orphaned files are
    # rare, and there is usually nothing to clean (low payoff).
    return 0
}

# Main developer tools cleanup sequence.
# Env: DRY_RUN
# Calls all specialized cleanup functions.
clean_developer_tools() {
    stop_section_spinner
    # Clean SQLite temporary files first
    clean_sqlite_temp_files
    clean_dev_npm
    clean_dev_python
@@ -292,7 +250,6 @@ clean_developer_tools() {
    clean_dev_nix
    clean_dev_shell
    clean_dev_frontend
    # Project build caches (delegated to the clean_caches module)
    clean_project_caches
    clean_dev_mobile
    clean_dev_jvm
@@ -302,22 +259,17 @@ clean_developer_tools() {
    clean_dev_api_tools
    clean_dev_network
    clean_dev_misc
    # Homebrew caches and cleanup (delegated to the clean_brew module)
    safe_clean ~/Library/Caches/Homebrew/* "Homebrew cache"
    # Clean Homebrew locks without repeated sudo prompts.
    local brew_lock_dirs=(
        "/opt/homebrew/var/homebrew/locks"
        "/usr/local/var/homebrew/locks"
    )
    for lock_dir in "${brew_lock_dirs[@]}"; do
        if [[ -d "$lock_dir" && -w "$lock_dir" ]]; then
            # User can write, safe to clean
            safe_clean "$lock_dir"/* "Homebrew lock files"
        elif [[ -d "$lock_dir" ]]; then
            # Directory exists but is not writable; check if empty to avoid noise.
            if find "$lock_dir" -mindepth 1 -maxdepth 1 -print -quit 2> /dev/null | grep -q .; then
                # Skip system/root-owned locks rather than nag for sudo.
                debug_log "Skipping read-only Homebrew locks in $lock_dir"
            fi
        fi

@@ -1,6 +1,6 @@
#!/bin/bash
# Project Purge Module (mo purge).
# Removes heavy project build artifacts and dependencies.
set -euo pipefail

PROJECT_LIB_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
@@ -10,7 +10,7 @@ if ! command -v ensure_user_dir > /dev/null 2>&1; then
    source "$CORE_LIB_DIR/common.sh"
fi

# Targets to look for (heavy build artifacts).
readonly PURGE_TARGETS=(
    "node_modules"
    "target" # Rust, Maven
@@ -29,12 +29,12 @@ readonly PURGE_TARGETS=(
    ".parcel-cache" # Parcel bundler
    ".dart_tool" # Flutter/Dart build cache
)

# Minimum age in days before considering for cleanup.
readonly MIN_AGE_DAYS=7

# Scan depth defaults (relative to search root).
readonly PURGE_MIN_DEPTH_DEFAULT=2
readonly PURGE_MAX_DEPTH_DEFAULT=8

# Search paths (default, can be overridden via config file).
readonly DEFAULT_PURGE_SEARCH_PATHS=(
    "$HOME/www"
    "$HOME/dev"
@@ -46,13 +46,13 @@ readonly DEFAULT_PURGE_SEARCH_PATHS=(
    "$HOME/Development"
)

# Config file for custom purge paths.
readonly PURGE_CONFIG_FILE="$HOME/.config/mole/purge_paths"

# Resolved search paths.
PURGE_SEARCH_PATHS=()

# Project indicators for container detection (if a directory contains these,
# it's likely a project).
readonly PROJECT_INDICATORS=(
    "package.json"
    "Cargo.toml"
@@ -68,12 +68,12 @@ readonly PROJECT_INDICATORS=(
    ".git"
)

# Check if a directory contains projects (directly or in subdirectories).
is_project_container() {
    local dir="$1"
    local max_depth="${2:-2}"

    # Skip hidden and system directories.
    local basename
    basename=$(basename "$dir")
    [[ "$basename" == .* ]] && return 1
@@ -84,7 +84,7 @@ is_project_container() {
    [[ "$basename" == "Pictures" ]] && return 1
    [[ "$basename" == "Public" ]] && return 1

    # Build one find expression covering all indicators (single call for efficiency).
    local -a find_args=("$dir" "-maxdepth" "$max_depth" "(")
    local first=true
    for indicator in "${PROJECT_INDICATORS[@]}"; do
@@ -97,7 +97,6 @@ is_project_container() {
    done
    find_args+=(")" "-print" "-quit")

    if find "${find_args[@]}" 2> /dev/null | grep -q .; then
        return 0
    fi
@@ -105,24 +104,22 @@ is_project_container() {
    return 1
}
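is_project_container leans on a single find call that short-circuits at the first hit. The same trick written out literally for three of the indicators above; the helper name is illustrative:

# Sketch: -print -quit stops the walk at the first matching indicator.
has_any_indicator() {
    local dir="$1"
    find "$dir" -maxdepth 2 \( -name "package.json" -o -name "Cargo.toml" -o -name ".git" \) -print -quit 2> /dev/null | grep -q .
}
has_any_indicator "$HOME/dev/myapp" && echo "project container"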

# Discover project directories in $HOME.
discover_project_dirs() {
    local -a discovered=()

    # First take the default paths that exist
    for path in "${DEFAULT_PURGE_SEARCH_PATHS[@]}"; do
        if [[ -d "$path" ]]; then
            discovered+=("$path")
        fi
    done

    # Then scan $HOME for other project containers (depth 1).
    local dir
    for dir in "$HOME"/*/; do
        [[ ! -d "$dir" ]] && continue
        dir="${dir%/}" # Remove trailing slash

        # Skip if already in defaults
        local already_found=false
        for existing in "${DEFAULT_PURGE_SEARCH_PATHS[@]}"; do
            if [[ "$dir" == "$existing" ]]; then
@@ -132,17 +129,15 @@ discover_project_dirs() {
        done
        [[ "$already_found" == "true" ]] && continue

        # Keep the directory if it contains projects
        if is_project_container "$dir" 2; then
            discovered+=("$dir")
        fi
    done

    # Return unique paths
    printf '%s\n' "${discovered[@]}" | sort -u
}

# Save discovered paths to config.
save_discovered_paths() {
    local -a paths=("$@")

@@ -166,26 +161,20 @@ EOF
load_purge_config() {
    PURGE_SEARCH_PATHS=()

    # Try loading from the config file
    if [[ -f "$PURGE_CONFIG_FILE" ]]; then
        while IFS= read -r line; do
            # Trim leading/trailing whitespace
            line="${line#"${line%%[![:space:]]*}"}"
            line="${line%"${line##*[![:space:]]}"}"

            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^# ]] && continue

            # Expand tilde to HOME
            line="${line/#\~/$HOME}"

            PURGE_SEARCH_PATHS+=("$line")
        done < "$PURGE_CONFIG_FILE"
    fi

    # If no paths were loaded, auto-discover and save
    if [[ ${#PURGE_SEARCH_PATHS[@]} -eq 0 ]]; then
        # Show a discovery message when running interactively
        if [[ -t 1 ]] && [[ -z "${_PURGE_DISCOVERY_SILENT:-}" ]]; then
            echo -e "${GRAY}First run: discovering project directories...${NC}" >&2
        fi
@@ -197,47 +186,37 @@ load_purge_config() {

        if [[ ${#discovered[@]} -gt 0 ]]; then
            PURGE_SEARCH_PATHS=("${discovered[@]}")
            # Save for next time
            save_discovered_paths "${discovered[@]}"

            if [[ -t 1 ]] && [[ -z "${_PURGE_DISCOVERY_SILENT:-}" ]]; then
                echo -e "${GRAY}Found ${#discovered[@]} project directories, saved to config${NC}" >&2
            fi
        else
            # Fall back to defaults if nothing was found
            PURGE_SEARCH_PATHS=("${DEFAULT_PURGE_SEARCH_PATHS[@]}")
        fi
    fi
}

# Initialize paths on script load.
load_purge_config

# Args: $1 - path to check
# Safe cleanup requires the path to be inside a project directory:
# e.g., ~/.gradle is NOT safe, but ~/Projects/foo/.gradle IS safe.
is_safe_project_artifact() {
    local path="$1"
    local search_path="$2"
    # Path must be absolute
    if [[ "$path" != /* ]]; then
        return 1
    fi
    # Must not be a direct child of the search root: require at least one
    # level of depth, e.g. ~/www/weekly/node_modules is OK (depth >= 1)
    # but ~/www/node_modules is NOT (depth < 1).
    local relative_path="${path#"$search_path"/}"
    local depth=$(echo "$relative_path" | tr -cd '/' | wc -c)
    if [[ $depth -lt 1 ]]; then
        return 1
    fi
    return 0
}
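A worked example of the depth rule above, with illustrative paths:

path="$HOME/www/weekly/node_modules"; search_path="$HOME/www"
rel="${path#"$search_path"/}" # -> weekly/node_modules
depth=$(echo "$rel" | tr -cd '/' | wc -c) # -> 1 slash, so depth 1: safe to purge
# A bare node_modules directly under the search root yields depth 0 and is refused.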
# Scan for purge targets using fd (fast) or a pruned find, applying strict
# project boundary checks.
# Args: $1 - search path, $2 - output file
scan_purge_targets() {
    local search_path="$1"
    local output_file="$2"
@@ -255,7 +234,6 @@ scan_purge_targets() {
    if [[ ! -d "$search_path" ]]; then
        return
    fi
    # Use fd for a fast parallel search when available
    if command -v fd > /dev/null 2>&1; then
        local fd_args=(
            "--absolute-path"
@@ -273,47 +251,28 @@ scan_purge_targets() {
        for target in "${PURGE_TARGETS[@]}"; do
            fd_args+=("-g" "$target")
        done
        fd "${fd_args[@]}" . "$search_path" 2> /dev/null | while IFS= read -r item; do
            if is_safe_project_artifact "$item" "$search_path"; then
                echo "$item"
            fi
        done | filter_nested_artifacts > "$output_file"
    else
        # Fallback: pruned find avoids descending into heavy directories.
        # Once a target like node_modules is found it is printed and never
        # entered, a massive speedup (O(project_dirs) vs O(files)).
        # Logic: ( prune_pattern -prune -o target_pattern -print -prune );
        # implicit recursion covers directories matching no pattern, and
        # -print fires only on targets.
        local prune_dirs=(".git" "Library" ".Trash" "Applications")
        local find_expr=()
        # Excludes: pruned silently
        for dir in "${prune_dirs[@]}"; do
            find_expr+=("-name" "$dir" "-prune" "-o")
        done
        # Targets: printed AND pruned
        local i=0
        for target in "${PURGE_TARGETS[@]}"; do
            find_expr+=("-name" "$target" "-print" "-prune")
            # Add -o between targets, but not after the very last one
            if [[ $i -lt $((${#PURGE_TARGETS[@]} - 1)) ]]; then
                find_expr+=("-o")
            fi
@@ -327,15 +286,12 @@ scan_purge_targets() {
        done | filter_nested_artifacts > "$output_file"
    fi
}
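For two example targets, the find expression the fallback loop assembles expands to the literal command below; matched directories are printed but never entered, so each artifact root is reported exactly once:

find "$HOME/dev" \
    -name ".git" -prune -o \
    -name "Library" -prune -o \
    -name "node_modules" -print -prune -o \
    -name "target" -print -prune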
# Filter out nested artifacts (e.g. node_modules inside node_modules).
filter_nested_artifacts() {
    while IFS= read -r item; do
        local parent_dir=$(dirname "$item")
        local is_nested=false
        for target in "${PURGE_TARGETS[@]}"; do
            # Nested when the parent IS a target or sits INSIDE one, e.g.
            # .../node_modules/foo/node_modules. The strict /target/ match
            # avoids false positives like "my_node_modules_backup".
            if [[ "$parent_dir" == *"/$target/"* || "$parent_dir" == *"/$target" ]]; then
                is_nested=true
                break
@@ -347,14 +303,13 @@ filter_nested_artifacts() {
        done
    done
}
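An example of the strict nesting test above, with illustrative paths:

parent="/Users/me/dev/app/node_modules/foo"
[[ "$parent" == *"/node_modules/"* || "$parent" == *"/node_modules" ]] && echo nested # -> nested
parent="/Users/me/dev/my_node_modules_backup"
[[ "$parent" == *"/node_modules/"* || "$parent" == *"/node_modules" ]] || echo kept # -> kept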
# Args: $1 - path
# Check if a path was modified recently (safety check).
is_recently_modified() {
    local path="$1"
    local age_days=$MIN_AGE_DAYS
    if [[ ! -e "$path" ]]; then
        return 1
    fi
    # Get the modification time via the base.sh helper (handles GNU vs BSD stat)
    local mod_time
    mod_time=$(get_file_mtime "$path")
    local current_time=$(date +%s)
@@ -367,7 +322,7 @@ is_recently_modified() {
    fi
}

# Args: $1 - path
# Get directory size in KB.
get_dir_size_kb() {
    local path="$1"
    if [[ -d "$path" ]]; then
@@ -376,10 +331,7 @@ get_dir_size_kb() {
        echo "0"
    fi
}

# Purge category selector.
# Args: category names and metadata as arrays (passed via global vars).
# Uses PURGE_RECENT_CATEGORIES to mark categories with recent items (default unselected).
# Returns: selected indices in PURGE_SELECTION_RESULT (comma-separated).
select_purge_categories() {
    local -a categories=("$@")
    local total_items=${#categories[@]}
@@ -388,8 +340,7 @@ select_purge_categories() {
        return 1
    fi

    # Calculate items per page based on terminal height.
    # Reserved rows: header(2) + blank(2) + footer(1) = 5.
    _get_items_per_page() {
        local term_height=24
        if [[ -t 0 ]] || [[ -t 2 ]]; then

@@ -1,11 +1,9 @@
#!/bin/bash
# System-Level Cleanup Module (requires sudo).
# Deep system cleanup and Time Machine failed-backup removal.
set -euo pipefail

# System caches, logs, and temp files.
clean_deep_system() {
    stop_section_spinner
    # Clean old system caches
    local cache_cleaned=0
    safe_sudo_find_delete "/Library/Caches" "*.cache" "$MOLE_TEMP_FILE_AGE_DAYS" "f" && cache_cleaned=1 || true
    safe_sudo_find_delete "/Library/Caches" "*.tmp" "$MOLE_TEMP_FILE_AGE_DAYS" "f" && cache_cleaned=1 || true
@@ -15,7 +13,6 @@ clean_deep_system() {
    safe_sudo_find_delete "/private/tmp" "*" "${MOLE_TEMP_FILE_AGE_DAYS}" "f" && tmp_cleaned=1 || true
    safe_sudo_find_delete "/private/var/tmp" "*" "${MOLE_TEMP_FILE_AGE_DAYS}" "f" && tmp_cleaned=1 || true
    [[ $tmp_cleaned -eq 1 ]] && log_success "System temp files"
    # Clean crash reports
    safe_sudo_find_delete "/Library/Logs/DiagnosticReports" "*" "$MOLE_CRASH_REPORT_AGE_DAYS" "f" || true
    log_success "System crash reports"
    safe_sudo_find_delete "/private/var/log" "*.log" "$MOLE_LOG_AGE_DAYS" "f" || true
@@ -91,18 +88,15 @@ clean_deep_system() {
    stop_section_spinner
    [[ $diag_logs_cleaned -eq 1 ]] && log_success "System diagnostic trace logs"
}

# Incomplete Time Machine backups.
clean_time_machine_failed_backups() {
    local tm_cleaned=0
    # Bail out early when tmutil is unavailable
    if ! command -v tmutil > /dev/null 2>&1; then
        echo -e " ${GREEN}${ICON_SUCCESS}${NC} No incomplete backups found"
        return 0
    fi
    # Start the spinner early (before the potentially slow tmutil call)
    start_section_spinner "Checking Time Machine configuration..."
    local spinner_active=true
    # Check if Time Machine is configured (short timeout for faster response)
    local tm_info
    tm_info=$(run_with_timeout 2 tmutil destinationinfo 2>&1 || echo "failed")
    if [[ "$tm_info" == *"No destinations configured"* || "$tm_info" == "failed" ]]; then
@@ -119,7 +113,6 @@ clean_time_machine_failed_backups() {
        echo -e " ${GREEN}${ICON_SUCCESS}${NC} No incomplete backups found"
        return 0
    fi
    # Skip if a backup is running (check actual Running status, not just daemon existence)
    if tmutil status 2> /dev/null | grep -q "Running = 1"; then
        if [[ "$spinner_active" == "true" ]]; then
            stop_section_spinner
@@ -127,22 +120,19 @@ clean_time_machine_failed_backups() {
        echo -e " ${YELLOW}!${NC} Time Machine backup in progress, skipping cleanup"
        return 0
    fi
    # Update the spinner message for volume scanning
    if [[ "$spinner_active" == "true" ]]; then
        start_section_spinner "Checking backup volumes..."
    fi
    # Fast pre-scan: find volumes with backup directories to avoid slow tmutil checks.
    local -a backup_volumes=()
    for volume in /Volumes/*; do
        [[ -d "$volume" ]] || continue
        [[ "$volume" == "/Volumes/MacintoshHD" || "$volume" == "/" ]] && continue
        [[ -L "$volume" ]] && continue
        # Quick check: does this volume have backup directories?
        if [[ -d "$volume/Backups.backupdb" ]] || [[ -d "$volume/.MobileBackups" ]]; then
            backup_volumes+=("$volume")
        fi
    done
    # Nothing to scan: stop the spinner and return
    if [[ ${#backup_volumes[@]} -eq 0 ]]; then
        if [[ "$spinner_active" == "true" ]]; then
            stop_section_spinner
@@ -150,23 +140,20 @@ clean_time_machine_failed_backups() {
        echo -e " ${GREEN}${ICON_SUCCESS}${NC} No incomplete backups found"
        return 0
    fi
    # Potential backup volumes found; scan them
    if [[ "$spinner_active" == "true" ]]; then
        start_section_spinner "Scanning backup volumes..."
    fi
    for volume in "${backup_volumes[@]}"; do
        # Skip network volumes (quick check)
        local fs_type
        fs_type=$(run_with_timeout 1 command df -T "$volume" 2> /dev/null | tail -1 | awk '{print $2}' || echo "unknown")
        case "$fs_type" in
            nfs | smbfs | afpfs | cifs | webdav | unknown) continue ;;
        esac
        # HFS+ style backups (Backups.backupdb)
        local backupdb_dir="$volume/Backups.backupdb"
        if [[ -d "$backupdb_dir" ]]; then
            while IFS= read -r inprogress_file; do
                [[ -d "$inprogress_file" ]] || continue
                # Only delete old incomplete backups (safety window).
                local file_mtime=$(get_file_mtime "$inprogress_file")
                local current_time=$(date +%s)
                local hours_old=$(((current_time - file_mtime) / 3600))
@@ -175,7 +162,6 @@ clean_time_machine_failed_backups() {
                fi
                local size_kb=$(get_path_size_kb "$inprogress_file")
                [[ "$size_kb" -le 0 ]] && continue
                # Stop the spinner before the first output
                if [[ "$spinner_active" == "true" ]]; then
                    stop_section_spinner
                    spinner_active=false
@@ -188,7 +174,6 @@ clean_time_machine_failed_backups() {
                    note_activity
                    continue
                fi
                # Real deletion
                if ! command -v tmutil > /dev/null 2>&1; then
                    echo -e " ${YELLOW}!${NC} tmutil not available, skipping: $backup_name"
                    continue
@@ -205,17 +190,15 @@ clean_time_machine_failed_backups() {
                fi
            done < <(run_with_timeout 15 find "$backupdb_dir" -maxdepth 3 -type d \( -name "*.inProgress" -o -name "*.inprogress" \) 2> /dev/null || true)
        fi
        # APFS style backups (.backupbundle or .sparsebundle)
        for bundle in "$volume"/*.backupbundle "$volume"/*.sparsebundle; do
            [[ -e "$bundle" ]] || continue
            [[ -d "$bundle" ]] || continue
            # Check whether the bundle is mounted
            local bundle_name=$(basename "$bundle")
            local mounted_path=$(hdiutil info 2> /dev/null | grep -A 5 "image-path.*$bundle_name" | grep "/Volumes/" | awk '{print $1}' | head -1 || echo "")
            if [[ -n "$mounted_path" && -d "$mounted_path" ]]; then
                while IFS= read -r inprogress_file; do
                    [[ -d "$inprogress_file" ]] || continue
                    # Only delete old incomplete backups (safety window)
                    local file_mtime=$(get_file_mtime "$inprogress_file")
                    local current_time=$(date +%s)
                    local hours_old=$(((current_time - file_mtime) / 3600))
@@ -224,7 +207,6 @@ clean_time_machine_failed_backups() {
                    fi
                    local size_kb=$(get_path_size_kb "$inprogress_file")
                    [[ "$size_kb" -le 0 ]] && continue
                    # Stop the spinner before the first output
                    if [[ "$spinner_active" == "true" ]]; then
                        stop_section_spinner
                        spinner_active=false
@@ -237,7 +219,6 @@ clean_time_machine_failed_backups() {
                    note_activity
                    continue
                fi
                # Real deletion
                if ! command -v tmutil > /dev/null 2>&1; then
                    continue
                fi
@@ -255,7 +236,6 @@ clean_time_machine_failed_backups() {
                fi
            done
        done
    done
    # Stop the spinner if still active (no backups found)
    if [[ "$spinner_active" == "true" ]]; then
        stop_section_spinner
    fi
@@ -263,33 +243,27 @@ clean_time_machine_failed_backups() {
        echo -e " ${GREEN}${ICON_SUCCESS}${NC} No incomplete backups found"
    fi
}
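Both loops above apply the same age gate before touching an .inProgress directory. A sketch of that safety window; the 24-hour cutoff is an assumption, since the real threshold sits in an elided hunk:

# Sketch: skip incomplete backups that are too fresh to be abandoned.
file_mtime=$(get_file_mtime "$inprogress_file")
hours_old=$(( ($(date +%s) - file_mtime) / 3600 ))
if (( hours_old < 24 )); then # assumed cutoff; a fresh backup may still resume
    continue
fi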
# Local APFS snapshots (keep the most recent).
clean_local_snapshots() {
if ! command -v tmutil > /dev/null 2>&1; then
return 0
fi
start_section_spinner "Checking local snapshots..."
local snapshot_list
snapshot_list=$(tmutil listlocalsnapshots / 2> /dev/null)
stop_section_spinner
[[ -z "$snapshot_list" ]] && return 0
local cleaned_count=0
local total_cleaned_size=0 # Estimation not possible without thin
local newest_ts=0
local newest_name=""
local -a snapshots=()
while IFS= read -r line; do
if [[ "$line" =~ com\.apple\.TimeMachine\.([0-9]{4})-([0-9]{2})-([0-9]{2})-([0-9]{6}) ]]; then
local snap_name="${BASH_REMATCH[0]}"
snapshots+=("$snap_name")
local date_str="${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]} ${BASH_REMATCH[4]:0:2}:${BASH_REMATCH[4]:2:2}:${BASH_REMATCH[4]:4:2}"
local snap_ts=$(date -j -f "%Y-%m-%d %H:%M:%S" "$date_str" "+%s" 2> /dev/null || echo "0")
[[ "$snap_ts" == "0" ]] && continue
if [[ "$snap_ts" -gt "$newest_ts" ]]; then
newest_ts="$snap_ts"
@@ -332,16 +306,13 @@ clean_local_snapshots() {

local snap_name
for snap_name in "${snapshots[@]}"; do
if [[ "$snap_name" =~ com\.apple\.TimeMachine\.([0-9]{4})-([0-9]{2})-([0-9]{2})-([0-9]{6}) ]]; then
if [[ "${BASH_REMATCH[0]}" != "$newest_name" ]]; then
if [[ "$DRY_RUN" == "true" ]]; then
echo -e " ${YELLOW}${ICON_DRY_RUN}${NC} Local snapshot: $snap_name ${YELLOW}dry-run${NC}"
((cleaned_count++))
note_activity
else
if sudo tmutil deletelocalsnapshots "${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]}-${BASH_REMATCH[4]}" > /dev/null 2>&1; then
echo -e " ${GREEN}${ICON_SUCCESS}${NC} Removed snapshot: $snap_name"
((cleaned_count++))
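Snapshot names encode their own timestamps, and the function leans on BSD date (-j -f) plus BASH_REMATCH to recover epoch seconds from them. The conversion in isolation, as a sketch:

# Sketch: com.apple.TimeMachine.YYYY-MM-DD-HHMMSS -> epoch seconds (macOS date).
snap="com.apple.TimeMachine.2023-10-25-120000"
if [[ "$snap" =~ com\.apple\.TimeMachine\.([0-9]{4})-([0-9]{2})-([0-9]{2})-([0-9]{6}) ]]; then
    hms="${BASH_REMATCH[4]}"
    date_str="${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]} ${hms:0:2}:${hms:2:2}:${hms:4:2}"
    date -j -f "%Y-%m-%d %H:%M:%S" "$date_str" "+%s" # -j: don't set the clock; -f: input format
fi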
@@ -62,7 +62,7 @@ scan_external_volumes() {
done
stop_section_spinner
}
# Finder metadata (.DS_Store).
clean_finder_metadata() {
stop_section_spinner
if [[ "$PROTECT_FINDER_METADATA" == "true" ]]; then
@@ -72,16 +72,14 @@ clean_finder_metadata() {
fi
clean_ds_store_tree "$HOME" "Home directory (.DS_Store)"
}
# macOS system caches and user-level leftovers.
clean_macos_system_caches() {
stop_section_spinner
# safe_clean already checks protected paths.
safe_clean ~/Library/Saved\ Application\ State/* "Saved application states" || true
safe_clean ~/Library/Caches/com.apple.photoanalysisd "Photo analysis cache" || true
safe_clean ~/Library/Caches/com.apple.akd "Apple ID cache" || true
safe_clean ~/Library/Caches/com.apple.WebKit.Networking/* "WebKit network cache" || true
safe_clean ~/Library/DiagnosticReports/* "Diagnostic reports" || true
safe_clean ~/Library/Caches/com.apple.QuickLook.thumbnailcache "QuickLook thumbnails" || true
safe_clean ~/Library/Caches/Quick\ Look/* "QuickLook cache" || true
@@ -158,28 +156,26 @@ clean_mail_downloads() {
note_activity
fi
}
# Sandboxed app caches.
clean_sandboxed_app_caches() {
stop_section_spinner
safe_clean ~/Library/Containers/com.apple.wallpaper.agent/Data/Library/Caches/* "Wallpaper agent cache"
safe_clean ~/Library/Containers/com.apple.mediaanalysisd/Data/Library/Caches/* "Media analysis cache"
safe_clean ~/Library/Containers/com.apple.AppStore/Data/Library/Caches/* "App Store cache"
safe_clean ~/Library/Containers/com.apple.configurator.xpc.InternetService/Data/tmp/* "Apple Configurator temp files"
local containers_dir="$HOME/Library/Containers"
[[ ! -d "$containers_dir" ]] && return 0
start_section_spinner "Scanning sandboxed apps..."
local total_size=0
local cleaned_count=0
local found_any=false
# Use nullglob to avoid literal globs.
local _ng_state
_ng_state=$(shopt -p nullglob || true)
shopt -s nullglob
for container_dir in "$containers_dir"/*; do
process_container_cache "$container_dir"
done
eval "$_ng_state"
stop_section_spinner
if [[ "$found_any" == "true" ]]; then
@@ -195,11 +191,10 @@ clean_sandboxed_app_caches() {
note_activity
fi
}
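The loop above brackets its globbing with a saved nullglob state so the function never leaks shell options to its caller. The pattern on its own (Bash 3.2 compatible):

# Sketch: enable nullglob locally, then restore whatever the caller had.
_ng_state=$(shopt -p nullglob || true) # prints "shopt -s nullglob" or "shopt -u nullglob"
shopt -s nullglob
for f in /tmp/no-such-prefix-*; do
    echo "processing $f" # never runs when nothing matches, instead of seeing the literal glob
done
eval "$_ng_state" # re-applies the saved on/off state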
# Process a single container cache directory.
process_container_cache() {
local container_dir="$1"
[[ -d "$container_dir" ]] || return 0
local bundle_id=$(basename "$container_dir")
if is_critical_system_component "$bundle_id"; then
return 0
@@ -208,17 +203,15 @@ process_container_cache() {
return 0
fi
local cache_dir="$container_dir/Data/Library/Caches"
[[ -d "$cache_dir" ]] || return 0
# Fast non-empty check.
if find "$cache_dir" -mindepth 1 -maxdepth 1 -print -quit 2> /dev/null | grep -q .; then
local size=$(get_path_size_kb "$cache_dir")
((total_size += size))
found_any=true
((cleaned_count++))
if [[ "$DRY_RUN" != "true" ]]; then
# Clean contents safely with local nullglob.
local _ng_state
_ng_state=$(shopt -p nullglob || true)
shopt -s nullglob
@@ -230,11 +223,11 @@ process_container_cache() {
fi
fi
}
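The emptiness probe above is deliberately cheap: find emits at most one entry (-print -quit) and grep -q just tests whether anything arrived. As a reusable sketch:

# Sketch: fast "does this directory contain anything?" test.
dir_has_content() {
    find "$1" -mindepth 1 -maxdepth 1 -print -quit 2> /dev/null | grep -q .
}

dir_has_content "$HOME/Library/Caches" && echo "not empty"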
# Browser caches (Safari/Chrome/Edge/Firefox).
clean_browsers() {
stop_section_spinner
safe_clean ~/Library/Caches/com.apple.Safari/* "Safari cache"
# Chrome/Chromium.
safe_clean ~/Library/Caches/Google/Chrome/* "Chrome cache"
safe_clean ~/Library/Application\ Support/Google/Chrome/*/Application\ Cache/* "Chrome app cache"
safe_clean ~/Library/Application\ Support/Google/Chrome/*/GPUCache/* "Chrome GPU cache"
@@ -251,7 +244,7 @@ clean_browsers() {
safe_clean ~/Library/Caches/zen/* "Zen cache"
safe_clean ~/Library/Application\ Support/Firefox/Profiles/*/cache2/* "Firefox profile cache"
}
# Cloud storage caches.
clean_cloud_storage() {
stop_section_spinner
safe_clean ~/Library/Caches/com.dropbox.* "Dropbox cache"
@@ -262,7 +255,7 @@ clean_cloud_storage() {
safe_clean ~/Library/Caches/com.box.desktop "Box cache"
safe_clean ~/Library/Caches/com.microsoft.OneDrive "OneDrive cache"
}
# Office app caches.
clean_office_applications() {
stop_section_spinner
safe_clean ~/Library/Caches/com.microsoft.Word "Microsoft Word cache"
@@ -274,7 +267,7 @@ clean_office_applications() {
safe_clean ~/Library/Caches/org.mozilla.thunderbird/* "Thunderbird cache"
safe_clean ~/Library/Caches/com.apple.mail/* "Apple Mail cache"
}
# Virtualization caches.
clean_virtualization_tools() {
stop_section_spinner
safe_clean ~/Library/Caches/com.vmware.fusion "VMware Fusion cache"
@@ -282,7 +275,7 @@ clean_virtualization_tools() {
safe_clean ~/VirtualBox\ VMs/.cache "VirtualBox cache"
safe_clean ~/.vagrant.d/tmp/* "Vagrant temporary files"
}
# Application Support logs/caches.
clean_application_support_logs() {
stop_section_spinner
if [[ ! -d "$HOME/Library/Application Support" ]] || ! ls "$HOME/Library/Application Support" > /dev/null 2>&1; then
@@ -294,11 +287,10 @@ clean_application_support_logs() {
local total_size=0
local cleaned_count=0
local found_any=false
# Enable nullglob for safe globbing.
local _ng_state
_ng_state=$(shopt -p nullglob || true)
shopt -s nullglob
for app_dir in ~/Library/Application\ Support/*; do
[[ -d "$app_dir" ]] || continue
local app_name=$(basename "$app_dir")
@@ -333,7 +325,7 @@ clean_application_support_logs() {
fi
done
done
# Group Containers logs (explicit allowlist).
local known_group_containers=(
"group.com.apple.contentdelivery"
)
@@ -357,7 +349,6 @@ clean_application_support_logs() {
fi
done
done
eval "$_ng_state"
stop_section_spinner
if [[ "$found_any" == "true" ]]; then
@@ -373,10 +364,10 @@ clean_application_support_logs() {
note_activity
fi
}
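All of the targets above flow through safe_clean rather than raw rm, so extending coverage is a one-line change. A hedged example of the call shape, inferred only from the calls above (a glob, a human-readable label, and || true to stay safe under set -e); com.example.someapp is a placeholder:

# Sketch: adding a hypothetical new cache target.
safe_clean ~/Library/Caches/com.example.someapp/* "Example app cache" || true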
# iOS device backup info.
check_ios_device_backups() {
local backup_dir="$HOME/Library/Application Support/MobileSync/Backup"
# Simplified check without find to avoid hanging.
if [[ -d "$backup_dir" ]]; then
local backup_kb=$(get_path_size_kb "$backup_dir")
if [[ -n "${backup_kb:-}" && "$backup_kb" -gt 102400 ]]; then
@@ -390,8 +381,7 @@ check_ios_device_backups() {
fi
return 0
}
# Apple Silicon specific caches (IS_M_SERIES).
clean_apple_silicon_caches() {
if [[ "${IS_M_SERIES:-false}" != "true" ]]; then
return 0
@@ -12,9 +12,7 @@ readonly MOLE_APP_PROTECTION_LOADED=1
_MOLE_CORE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
[[ -z "${MOLE_BASE_LOADED:-}" ]] && source "$_MOLE_CORE_DIR/base.sh"

# Application Management

# Critical system components protected from uninstallation
readonly SYSTEM_CRITICAL_BUNDLES=(
@@ -70,9 +68,7 @@ readonly SYSTEM_CRITICAL_BUNDLES=(

# Applications with sensitive data; protected during cleanup but removable
readonly DATA_PROTECTED_BUNDLES=(
# System Utilities & Cleanup Tools
"com.nektony.*" # App Cleaner & Uninstaller
"com.macpaw.*" # CleanMyMac, CleanMaster
"com.freemacsoft.AppCleaner" # AppCleaner
@@ -82,9 +78,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.grandperspectiv.*" # GrandPerspective
"com.binaryfruit.*" # FusionCast

# Password Managers & Security
"com.1password.*" # 1Password
"com.agilebits.*" # 1Password legacy
"com.lastpass.*" # LastPass
@@ -95,9 +89,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.authy.*" # Authy
"com.yubico.*" # YubiKey Manager

# Development Tools - IDEs & Editors
"com.jetbrains.*" # JetBrains IDEs (IntelliJ, DataGrip, etc.)
"JetBrains*" # JetBrains Application Support folders
"com.microsoft.VSCode" # Visual Studio Code
@@ -112,9 +104,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"abnerworks.Typora" # Typora (Markdown editor)
"com.uranusjr.macdown" # MacDown

# AI & LLM Tools
"com.todesktop.*" # Cursor (often uses generic todesktop ID)
"Cursor" # Cursor App Support
"com.anthropic.claude*" # Claude
@@ -136,9 +126,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.quora.poe.electron" # Poe
"chat.openai.com.*" # OpenAI web wrappers

# Development Tools - Database Clients
"com.sequelpro.*" # Sequel Pro
"com.sequel-ace.*" # Sequel Ace
"com.tinyapp.*" # TablePlus
@@ -151,9 +139,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.valentina-db.Valentina-Studio" # Valentina Studio
"com.dbvis.DbVisualizer" # DbVisualizer

# Development Tools - API & Network
"com.postmanlabs.mac" # Postman
"com.konghq.insomnia" # Insomnia
"com.CharlesProxy.*" # Charles Proxy
@@ -164,9 +150,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.telerik.Fiddler" # Fiddler
"com.usebruno.app" # Bruno (API client)

# Network Proxy & VPN Tools (pattern-based protection)
# Clash variants
"*clash*" # All Clash variants (ClashX, ClashX Pro, Clash Verge, etc)
"*Clash*" # Capitalized variants
@@ -217,9 +201,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"*Fliqlo*" # Fliqlo screensaver (all case variants)
"*fliqlo*" # Fliqlo lowercase

# Development Tools - Git & Version Control
"com.github.GitHubDesktop" # GitHub Desktop
"com.sublimemerge" # Sublime Merge
"com.torusknot.SourceTreeNotMAS" # SourceTree
@@ -229,9 +211,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.fork.Fork" # Fork
"com.axosoft.gitkraken" # GitKraken

# Development Tools - Terminal & Shell
"com.googlecode.iterm2" # iTerm2
"net.kovidgoyal.kitty" # Kitty
"io.alacritty" # Alacritty
@@ -242,9 +222,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"dev.warp.Warp-Stable" # Warp
"com.termius-dmg" # Termius (SSH client)

# Development Tools - Docker & Virtualization
"com.docker.docker" # Docker Desktop
"com.getutm.UTM" # UTM
"com.vmware.fusion" # VMware Fusion
@@ -253,9 +231,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.vagrant.*" # Vagrant
"com.orbstack.OrbStack" # OrbStack

# System Monitoring & Performance
"com.bjango.istatmenus*" # iStat Menus
"eu.exelban.Stats" # Stats
"com.monitorcontrol.*" # MonitorControl
@@ -264,9 +240,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.activity-indicator.app" # Activity Indicator
"net.cindori.sensei" # Sensei

# Window Management & Productivity
"com.macitbetter.*" # BetterTouchTool, BetterSnapTool
"com.hegenberg.*" # BetterTouchTool legacy
"com.manytricks.*" # Moom, Witch, Name Mangler, Resolutionator
@@ -284,9 +258,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.gaosun.eul" # eul (system monitor)
"com.pointum.hazeover" # HazeOver

# Launcher & Automation
"com.runningwithcrayons.Alfred" # Alfred
"com.raycast.macos" # Raycast
"com.blacktree.Quicksilver" # Quicksilver
@@ -297,9 +269,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"org.pqrs.Karabiner-Elements" # Karabiner-Elements
"com.apple.Automator" # Automator (system, but keep user workflows)

# Note-Taking & Documentation
"com.bear-writer.*" # Bear
"com.typora.*" # Typora
"com.ulyssesapp.*" # Ulysses
@@ -318,9 +288,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.reflect.ReflectApp" # Reflect
"com.inkdrop.*" # Inkdrop

# Design & Creative Tools
"com.adobe.*" # Adobe Creative Suite
"com.bohemiancoding.*" # Sketch
"com.figma.*" # Figma
@@ -338,9 +306,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.autodesk.*" # Autodesk products
"com.sketchup.*" # SketchUp

# Communication & Collaboration
"com.tencent.xinWeChat" # WeChat (Chinese users)
"com.tencent.qq" # QQ
"com.alibaba.DingTalkMac" # DingTalk
@@ -363,9 +329,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.postbox-inc.postbox" # Postbox
"com.tinyspeck.slackmacgap" # Slack legacy

# Task Management & Productivity
"com.omnigroup.OmniFocus*" # OmniFocus
"com.culturedcode.*" # Things
"com.todoist.*" # Todoist
@@ -380,9 +344,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.notion.id" # Notion (also note-taking)
"com.linear.linear" # Linear

# File Transfer & Sync
"com.panic.transmit*" # Transmit (FTP/SFTP)
"com.binarynights.ForkLift*" # ForkLift
"com.noodlesoft.Hazel" # Hazel
@@ -391,9 +353,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.apple.Xcode.CloudDocuments" # Xcode Cloud Documents
"com.synology.*" # Synology apps

# Cloud Storage & Backup (Issue #204)
"com.dropbox.*" # Dropbox
"com.getdropbox.*" # Dropbox legacy
"*dropbox*" # Dropbox helpers/updaters
@@ -420,9 +380,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.shirtpocket.*" # SuperDuper backup
"homebrew.mxcl.*" # Homebrew services

# Screenshot & Recording
"com.cleanshot.*" # CleanShot X
"com.xnipapp.xnip" # Xnip
"com.reincubate.camo" # Camo
@@ -436,9 +394,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"com.linebreak.CloudApp" # CloudApp
"com.droplr.droplr-mac" # Droplr

# Media & Entertainment
"com.spotify.client" # Spotify
"com.apple.Music" # Apple Music
"com.apple.podcasts" # Apple Podcasts
@@ -456,9 +412,7 @@ readonly DATA_PROTECTED_BUNDLES=(
"tv.plex.player.desktop" # Plex
"com.netease.163music" # NetEase Music

# License Management & App Stores
"com.paddle.Paddle*" # Paddle (license management)
"com.setapp.DesktopClient" # Setapp
"com.devmate.*" # DevMate (license framework)
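Entries such as "com.nektony.*" and "*clash*" are shell glob patterns, not literal bundle IDs. A minimal sketch of how a table like this can be matched (the repo's actual protection check lives elsewhere in lib/core/ and may differ):

# Sketch: glob-match a bundle ID against protection patterns.
patterns=("com.nektony.*" "*clash*" "com.1password.*")
is_protected_bundle() {
    local bundle_id="$1" pattern
    for pattern in "${patterns[@]}"; do
        [[ "$bundle_id" == $pattern ]] && return 0 # unquoted RHS makes [[ == ]] glob-match
    done
    return 1
}
is_protected_bundle "com.nektony.app-cleaner" && echo "protected"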
@@ -1,26 +1,19 @@
#!/bin/bash
# System Configuration Maintenance Module.
# Fix broken preferences and login items.

set -euo pipefail

# Remove corrupted preference files.

fix_broken_preferences() {
local prefs_dir="$HOME/Library/Preferences"
[[ -d "$prefs_dir" ]] || return 0

local broken_count=0

while IFS= read -r plist_file; do
[[ -f "$plist_file" ]] || continue

local filename
filename=$(basename "$plist_file")
case "$filename" in
@@ -29,15 +22,13 @@ fix_broken_preferences() {
;;
esac

plutil -lint "$plist_file" > /dev/null 2>&1 && continue

safe_remove "$plist_file" true > /dev/null 2>&1 || true
((broken_count++))
done < <(command find "$prefs_dir" -maxdepth 1 -name "*.plist" -type f 2> /dev/null || true)

# Check ByHost preferences.
local byhost_dir="$prefs_dir/ByHost"
if [[ -d "$byhost_dir" ]]; then
while IFS= read -r plist_file; do
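plutil -lint is the corruption oracle here: it exits non-zero for malformed plists, and only those get handed to safe_remove. The detection half as a read-only sketch:

# Sketch: list corrupt .plist files without deleting anything.
while IFS= read -r plist_file; do
    [[ -f "$plist_file" ]] || continue
    plutil -lint "$plist_file" > /dev/null 2>&1 && continue # exits 0 when well-formed
    echo "corrupt: $plist_file"
done < <(find "$HOME/Library/Preferences" -maxdepth 1 -name "*.plist" -type f 2> /dev/null)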
@@ -3,16 +3,12 @@

set -euo pipefail

# Config constants (override via env).
readonly MOLE_TM_THIN_TIMEOUT=180
readonly MOLE_TM_THIN_VALUE=9999999999
readonly MOLE_SQLITE_MAX_SIZE=104857600 # 100MB

# Dry-run aware output.
opt_msg() {
local message="$1"
if [[ "${MOLE_DRY_RUN:-0}" == "1" ]]; then
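opt_msg pairs with a MOLE_DRY_RUN guard that recurs through every opt_* function below: the side effect is skipped when MOLE_DRY_RUN=1, while the status line prints either way. A minimal sketch of the convention (assumes opt_msg from this module is in scope):

# Sketch: dry-run gating pattern used by the opt_* functions.
opt_example_refresh() {
    if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
        qlmanage -r cache > /dev/null 2>&1 || true # real work only outside dry-run
    fi
    opt_msg "QuickLook cache refreshed" # opt_msg itself picks the dry-run icon
}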
@@ -92,7 +88,6 @@ is_memory_pressure_high() {
}

flush_dns_cache() {
if [[ "${MOLE_DRY_RUN:-0}" == "1" ]]; then
MOLE_DNS_FLUSHED=1
return 0
@@ -105,7 +100,7 @@ flush_dns_cache() {
return 1
}

# Basic system maintenance.
opt_system_maintenance() {
if flush_dns_cache; then
opt_msg "DNS cache flushed"
@@ -120,10 +115,8 @@ opt_system_maintenance() {
fi
}

# Refresh Finder caches (QuickLook/icon services).
opt_cache_refresh() {
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
qlmanage -r cache > /dev/null 2>&1 || true
qlmanage -r > /dev/null 2>&1 || true
@@ -151,7 +144,7 @@ opt_cache_refresh() {

# Removed: opt_radio_refresh - Interrupts active user connections (WiFi, Bluetooth), degrading UX

# Old saved states cleanup.
opt_saved_state_cleanup() {
local state_dir="$HOME/Library/Saved Application State"

@@ -193,7 +186,7 @@ opt_fix_broken_configs() {
fi
}

# DNS cache refresh.
opt_network_optimization() {
if [[ "${MOLE_DNS_FLUSHED:-0}" == "1" ]]; then
opt_msg "DNS cache already refreshed"
@@ -209,8 +202,7 @@ opt_network_optimization() {
fi
}

# SQLite vacuum for Mail/Messages/Safari (safety checks applied).
opt_sqlite_vacuum() {
if ! command -v sqlite3 > /dev/null 2>&1; then
echo -e " ${GRAY}-${NC} Database optimization already optimal (sqlite3 unavailable)"
@@ -254,15 +246,13 @@ opt_sqlite_vacuum() {
[[ ! -f "$db_file" ]] && continue
[[ "$db_file" == *"-wal" || "$db_file" == *"-shm" ]] && continue

should_protect_path "$db_file" && continue

if ! file "$db_file" 2> /dev/null | grep -q "SQLite"; then
continue
fi

# Skip large DBs (>100MB).
local file_size
file_size=$(get_file_size "$db_file")
if [[ "$file_size" -gt "$MOLE_SQLITE_MAX_SIZE" ]]; then
@@ -270,7 +260,7 @@ opt_sqlite_vacuum() {
continue
fi

# Skip if freelist is tiny (already compact).
local page_info=""
page_info=$(run_with_timeout 5 sqlite3 "$db_file" "PRAGMA page_count; PRAGMA freelist_count;" 2> /dev/null || echo "")
local page_count=""
@@ -284,7 +274,7 @@ opt_sqlite_vacuum() {
fi
fi

# Verify integrity before VACUUM.
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
local integrity_check=""
set +e
@@ -292,14 +282,12 @@ opt_sqlite_vacuum() {
local integrity_status=$?
set -e

if [[ $integrity_status -ne 0 ]] || ! echo "$integrity_check" | grep -q "ok"; then
((skipped++))
continue
fi
fi

local exit_code=0
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
set +e
@@ -315,7 +303,6 @@ opt_sqlite_vacuum() {
((failed++))
fi
else
((vacuumed++))
fi
done < <(compgen -G "$pattern" || true)
@@ -346,8 +333,7 @@ opt_sqlite_vacuum() {
fi
}
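The freelist probe is what lets VACUUM be skipped when it would reclaim nothing: free pages over total pages approximates recoverable space. A standalone, read-only sketch (the ~5% cutoff and the path are assumptions; the module's actual threshold is outside this hunk):

# Sketch: decide whether VACUUM is worth running.
db="$HOME/Library/Mail/Envelope Index" # hypothetical target
out=$(sqlite3 "$db" "PRAGMA page_count; PRAGMA freelist_count;" 2> /dev/null) || exit 0
page_count=$(echo "$out" | sed -n 1p)
freelist=$(echo "$out" | sed -n 2p)
if [[ -n "$page_count" && "$page_count" -gt 0 ]] && [[ $((freelist * 100 / page_count)) -lt 5 ]]; then
    echo "already compact, skipping VACUUM"
else
    echo "worth vacuuming ($freelist of $page_count pages free)"
fi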
# LaunchServices database rebuild
|
# LaunchServices rebuild ("Open with" issues).
|
||||||
# Fixes "Open with" menu issues, duplicate apps, broken file associations
|
|
||||||
opt_launch_services_rebuild() {
|
opt_launch_services_rebuild() {
|
||||||
if [[ -t 1 ]]; then
|
if [[ -t 1 ]]; then
|
||||||
start_inline_spinner ""
|
start_inline_spinner ""
|
||||||
@@ -358,7 +344,6 @@ opt_launch_services_rebuild() {
|
|||||||
if [[ -f "$lsregister" ]]; then
|
if [[ -f "$lsregister" ]]; then
|
||||||
local success=0
|
local success=0
|
||||||
|
|
||||||
# Skip actual rebuild in dry-run mode
|
|
||||||
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
|
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
|
||||||
set +e
|
set +e
|
||||||
"$lsregister" -r -domain local -domain user -domain system > /dev/null 2>&1
|
"$lsregister" -r -domain local -domain user -domain system > /dev/null 2>&1
|
||||||
@@ -369,7 +354,7 @@ opt_launch_services_rebuild() {
|
|||||||
fi
|
fi
|
||||||
set -e
|
set -e
|
||||||
else
|
else
|
||||||
success=0 # Assume success in dry-run mode
|
success=0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
if [[ -t 1 ]]; then
|
if [[ -t 1 ]]; then
|
||||||
@@ -390,18 +375,16 @@ opt_launch_services_rebuild() {
|
|||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
# Font cache rebuild
|
# Font cache rebuild.
|
||||||
# Fixes font rendering issues, missing fonts, and character display problems
|
|
||||||
opt_font_cache_rebuild() {
|
opt_font_cache_rebuild() {
|
||||||
local success=false
|
local success=false
|
||||||
|
|
||||||
# Skip actual font cache removal in dry-run mode
|
|
||||||
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
|
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
|
||||||
if sudo atsutil databases -remove > /dev/null 2>&1; then
|
if sudo atsutil databases -remove > /dev/null 2>&1; then
|
||||||
success=true
|
success=true
|
||||||
fi
|
fi
|
||||||
else
|
else
|
||||||
success=true # Assume success in dry-run mode
|
success=true
|
||||||
fi
|
fi
|
||||||
|
|
||||||
if [[ "$success" == "true" ]]; then
|
if [[ "$success" == "true" ]]; then
|
||||||
@@ -417,8 +400,7 @@ opt_font_cache_rebuild() {
|
|||||||
# - opt_dyld_cache_update: Low benefit, time-consuming, auto-managed by macOS
|
# - opt_dyld_cache_update: Low benefit, time-consuming, auto-managed by macOS
|
||||||
# - opt_system_services_refresh: Risk of data loss when killing system services
|
# - opt_system_services_refresh: Risk of data loss when killing system services
|
||||||
|
|
||||||
# Memory pressure relief
|
# Memory pressure relief.
|
||||||
# Clears inactive memory and disk cache to improve system responsiveness
|
|
||||||
opt_memory_pressure_relief() {
|
opt_memory_pressure_relief() {
|
||||||
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
|
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
|
||||||
if ! is_memory_pressure_high; then
|
if ! is_memory_pressure_high; then
|
||||||
@@ -438,8 +420,7 @@ opt_memory_pressure_relief() {
|
|||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
# Network stack optimization
|
# Network stack reset (route + ARP).
|
||||||
# Flushes routing table and ARP cache to resolve network issues
|
|
||||||
opt_network_stack_optimize() {
|
opt_network_stack_optimize() {
|
||||||
local route_flushed="false"
|
local route_flushed="false"
|
||||||
local arp_flushed="false"
|
local arp_flushed="false"
|
||||||
@@ -460,12 +441,10 @@ opt_network_stack_optimize() {
|
|||||||
return 0
|
return 0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Flush routing table
|
|
||||||
if sudo route -n flush > /dev/null 2>&1; then
|
if sudo route -n flush > /dev/null 2>&1; then
|
||||||
route_flushed="true"
|
route_flushed="true"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Clear ARP cache
|
|
||||||
if sudo arp -a -d > /dev/null 2>&1; then
|
if sudo arp -a -d > /dev/null 2>&1; then
|
||||||
arp_flushed="true"
|
arp_flushed="true"
|
||||||
fi
|
fi
|
||||||
@@ -487,8 +466,7 @@ opt_network_stack_optimize() {
|
|||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
# Disk permissions repair
|
# User directory permissions repair.
|
||||||
# Fixes user home directory permission issues
|
|
||||||
opt_disk_permissions_repair() {
|
opt_disk_permissions_repair() {
|
||||||
local user_id
|
local user_id
|
||||||
user_id=$(id -u)
|
user_id=$(id -u)
|
||||||
@@ -524,11 +502,7 @@ opt_disk_permissions_repair() {
|
|||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
# Bluetooth module reset
|
# Bluetooth reset (skip if HID/audio active).
|
||||||
# Resets Bluetooth daemon to fix connectivity issues
|
|
||||||
# Intelligently detects Bluetooth audio usage:
|
|
||||||
# 1. Checks if default audio output is Bluetooth (precise)
|
|
||||||
# 2. Falls back to Bluetooth + media app detection (compatibility)
|
|
||||||
opt_bluetooth_reset() {
|
opt_bluetooth_reset() {
|
||||||
local spinner_started="false"
|
local spinner_started="false"
|
||||||
if [[ -t 1 ]]; then
|
if [[ -t 1 ]]; then
|
||||||
@@ -545,26 +519,20 @@ opt_bluetooth_reset() {
|
|||||||
return 0
|
return 0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Check if any audio is playing through Bluetooth
|
|
||||||
local bt_audio_active=false
|
local bt_audio_active=false
|
||||||
|
|
||||||
# Method 1: Check if default audio output is Bluetooth (precise)
|
|
||||||
local audio_info
|
local audio_info
|
||||||
audio_info=$(system_profiler SPAudioDataType 2> /dev/null || echo "")
|
audio_info=$(system_profiler SPAudioDataType 2> /dev/null || echo "")
|
||||||
|
|
||||||
# Extract default output device information
|
|
||||||
local default_output
|
local default_output
|
||||||
default_output=$(echo "$audio_info" | awk '/Default Output Device: Yes/,/^$/' 2> /dev/null || echo "")
|
default_output=$(echo "$audio_info" | awk '/Default Output Device: Yes/,/^$/' 2> /dev/null || echo "")
|
||||||
|
|
||||||
# Check if transport type is Bluetooth
|
|
||||||
if echo "$default_output" | grep -qi "Transport:.*Bluetooth"; then
|
if echo "$default_output" | grep -qi "Transport:.*Bluetooth"; then
|
||||||
bt_audio_active=true
|
bt_audio_active=true
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Method 2: Fallback - Bluetooth connected + media apps running (compatibility)
|
|
||||||
if [[ "$bt_audio_active" == "false" ]]; then
|
if [[ "$bt_audio_active" == "false" ]]; then
|
||||||
if system_profiler SPBluetoothDataType 2> /dev/null | grep -q "Connected: Yes"; then
|
if system_profiler SPBluetoothDataType 2> /dev/null | grep -q "Connected: Yes"; then
|
||||||
# Extended media apps list for broader coverage
|
|
||||||
local -a media_apps=("Music" "Spotify" "VLC" "QuickTime Player" "TV" "Podcasts" "Safari" "Google Chrome" "Chrome" "Firefox" "Arc" "IINA" "mpv")
|
local -a media_apps=("Music" "Spotify" "VLC" "QuickTime Player" "TV" "Podcasts" "Safari" "Google Chrome" "Chrome" "Firefox" "Arc" "IINA" "mpv")
|
||||||
for app in "${media_apps[@]}"; do
|
for app in "${media_apps[@]}"; do
|
||||||
if pgrep -x "$app" > /dev/null 2>&1; then
|
if pgrep -x "$app" > /dev/null 2>&1; then
|
||||||
@@ -583,7 +551,6 @@ opt_bluetooth_reset() {
|
|||||||
return 0
|
return 0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Safe to reset Bluetooth
|
|
||||||
if sudo pkill -TERM bluetoothd > /dev/null 2>&1; then
|
if sudo pkill -TERM bluetoothd > /dev/null 2>&1; then
|
||||||
sleep 1
|
sleep 1
|
||||||
if pgrep -x bluetoothd > /dev/null 2>&1; then
|
if pgrep -x bluetoothd > /dev/null 2>&1; then
|
||||||
@@ -609,11 +576,8 @@ opt_bluetooth_reset() {
|
|||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
# Spotlight index optimization
|
# Spotlight index check/rebuild (only if slow).
|
||||||
# Rebuilds Spotlight index if search is slow or results are inaccurate
|
|
||||||
# Only runs if index is actually problematic
|
|
||||||
opt_spotlight_index_optimize() {
|
opt_spotlight_index_optimize() {
|
||||||
# Check if Spotlight indexing is disabled
|
|
||||||
local spotlight_status
|
local spotlight_status
|
||||||
spotlight_status=$(mdutil -s / 2> /dev/null || echo "")
|
spotlight_status=$(mdutil -s / 2> /dev/null || echo "")
|
||||||
|
|
||||||
@@ -622,9 +586,7 @@ opt_spotlight_index_optimize() {
|
|||||||
return 0
|
return 0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Check if indexing is currently running
|
|
||||||
if echo "$spotlight_status" | grep -qi "Indexing enabled" && ! echo "$spotlight_status" | grep -qi "Indexing and searching disabled"; then
|
if echo "$spotlight_status" | grep -qi "Indexing enabled" && ! echo "$spotlight_status" | grep -qi "Indexing and searching disabled"; then
|
||||||
# Check index health by testing search speed twice
|
|
||||||
local slow_count=0
|
local slow_count=0
|
||||||
local test_start test_end test_duration
|
local test_start test_end test_duration
|
||||||
for _ in 1 2; do
|
for _ in 1 2; do
|
||||||
@@ -663,13 +625,11 @@ opt_spotlight_index_optimize() {
|
|||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
# Dock cache refresh
|
# Dock cache refresh.
|
||||||
# Fixes broken icons, duplicate items, and visual glitches in the Dock
|
|
||||||
opt_dock_refresh() {
|
opt_dock_refresh() {
|
||||||
local dock_support="$HOME/Library/Application Support/Dock"
|
local dock_support="$HOME/Library/Application Support/Dock"
|
||||||
local refreshed=false
|
local refreshed=false
|
||||||
|
|
||||||
# Remove Dock database files (icons, positions, etc.)
|
|
||||||
if [[ -d "$dock_support" ]]; then
|
if [[ -d "$dock_support" ]]; then
|
||||||
while IFS= read -r db_file; do
|
while IFS= read -r db_file; do
|
||||||
if [[ -f "$db_file" ]]; then
|
if [[ -f "$db_file" ]]; then
|
||||||
@@ -678,14 +638,11 @@ opt_dock_refresh() {
|
|||||||
done < <(find "$dock_support" -name "*.db" -type f 2> /dev/null || true)
|
done < <(find "$dock_support" -name "*.db" -type f 2> /dev/null || true)
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Also clear Dock plist cache
|
|
||||||
local dock_plist="$HOME/Library/Preferences/com.apple.dock.plist"
|
local dock_plist="$HOME/Library/Preferences/com.apple.dock.plist"
|
||||||
if [[ -f "$dock_plist" ]]; then
|
if [[ -f "$dock_plist" ]]; then
|
||||||
# Just touch to invalidate cache, don't delete (preserves user settings)
|
|
||||||
touch "$dock_plist" 2> /dev/null || true
|
touch "$dock_plist" 2> /dev/null || true
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Restart Dock to apply changes (skip in dry-run mode)
|
|
||||||
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
|
if [[ "${MOLE_DRY_RUN:-0}" != "1" ]]; then
|
||||||
killall Dock 2> /dev/null || true
|
killall Dock 2> /dev/null || true
|
||||||
fi
|
fi
|
||||||
@@ -696,7 +653,7 @@ opt_dock_refresh() {
|
|||||||
opt_msg "Dock refreshed"
|
opt_msg "Dock refreshed"
|
||||||
}
|
}
|
||||||
|
|
||||||
-# Execute optimization by action name
+# Dispatch optimization by action name.
 execute_optimization() {
     local action="$1"
     local path="${2:-}"

@@ -2,18 +2,13 @@

 set -euo pipefail

-# Ensure common.sh is loaded
+# Ensure common.sh is loaded.
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
 [[ -z "${MOLE_COMMON_LOADED:-}" ]] && source "$SCRIPT_DIR/lib/core/common.sh"

-# Batch uninstall functionality with minimal confirmations
-# Replaces the overly verbose individual confirmation approach
+# Batch uninstall with a single confirmation.

-# ============================================================================
-# Configuration: User Data Detection Patterns
-# ============================================================================
-# Directories that typically contain user-customized configurations, themes,
-# or personal data that users might want to backup before uninstalling
+# User data detection patterns (prompt user to backup if found).
 readonly SENSITIVE_DATA_PATTERNS=(
     "\.warp"     # Warp terminal configs/themes
     "/\.config/" # Standard Unix config directory
@@ -26,24 +21,20 @@ readonly SENSITIVE_DATA_PATTERNS=(
     "/\.gnupg/"  # GPG keys (critical)
 )

-# Join patterns into a single regex for grep
+# Join patterns into a single regex for grep.
 SENSITIVE_DATA_REGEX=$(
     IFS='|'
     echo "${SENSITIVE_DATA_PATTERNS[*]}"
 )
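The `SENSITIVE_DATA_REGEX` subshell above is a compact array-join: setting `IFS='|'` inside `$( ... )` makes `"${arr[*]}"` expand with `|` between elements, yielding one alternation usable with `grep -E`, and the IFS change never leaks into the caller. A small sketch with a hypothetical pattern list:

```bash
#!/bin/bash
# Join ERE fragments into a single alternation for grep -E.
patterns=("\.warp" "/\.config/" "/\.gnupg/")   # sample entries from the list above

regex=$(
    IFS='|'                  # scoped to the command substitution subshell
    echo "${patterns[*]}"    # expands to \.warp|/\.config/|/\.gnupg/
)

# Usage: does any path in a newline-separated list look sensitive?
printf '%s\n' "/Users/me/.gnupg/pubring.kbx" |
    grep -qE "$regex" && echo "sensitive data found"
```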

-# Decode and validate base64 encoded file list
-# Returns decoded string if valid, empty string otherwise
+# Decode and validate base64 file list (safe for set -e).
 decode_file_list() {
     local encoded="$1"
     local app_name="$2"
     local decoded

-    # Decode base64 data (macOS uses -D, GNU uses -d)
-    # Try macOS format first, then GNU format for compatibility
-    # IMPORTANT: Always return 0 to prevent set -e from terminating the script
+    # macOS uses -D, GNU uses -d. Always return 0 for set -e safety.
     if ! decoded=$(printf '%s' "$encoded" | base64 -D 2> /dev/null); then
-        # Fallback to GNU base64 format
         if ! decoded=$(printf '%s' "$encoded" | base64 -d 2> /dev/null); then
             log_error "Failed to decode file list for $app_name" >&2
             echo ""
@@ -51,14 +42,12 @@ decode_file_list() {
         fi
     fi

-    # Validate decoded data doesn't contain null bytes
     if [[ "$decoded" =~ $'\0' ]]; then
         log_warning "File list for $app_name contains null bytes, rejecting" >&2
         echo ""
         return 0 # Return success with empty string
     fi

-    # Validate paths look reasonable (each line should be a path or empty)
     while IFS= read -r line; do
         if [[ -n "$line" && ! "$line" =~ ^/ ]]; then
             log_warning "Invalid path in file list for $app_name: $line" >&2
@@ -70,24 +59,21 @@ decode_file_list() {
     echo "$decoded"
     return 0
 }
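The `-D`/`-d` fallback exists because BSD `base64` on macOS historically decoded with `-D`, while GNU coreutils uses `-d`. A minimal sketch of the encode/decode round-trip the uninstaller relies on (the sample `list` value is illustrative):

```bash
#!/bin/bash
# Flatten a multi-line list into one base64 token, then decode it portably.
list=$'/path/one\n/path/two'

# tr -d '\n' removes base64's line wrapping so the token fits in a single
# pipe-delimited record field.
encoded=$(printf '%s' "$list" | base64 | tr -d '\n')

# Try BSD/macOS decode first, then GNU coreutils.
decoded=$(printf '%s' "$encoded" | base64 -D 2> /dev/null) ||
    decoded=$(printf '%s' "$encoded" | base64 -d 2> /dev/null) ||
    decoded=""

printf '%s\n' "$decoded"
```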
-# Note: find_app_files() and calculate_total_size() functions now in lib/core/common.sh
+# Note: find_app_files() and calculate_total_size() are in lib/core/common.sh.

-# Stop Launch Agents and Daemons for an app
-# Args: $1 = bundle_id, $2 = has_system_files (true/false)
+# Stop Launch Agents/Daemons for an app.
 stop_launch_services() {
     local bundle_id="$1"
     local has_system_files="${2:-false}"

     [[ -z "$bundle_id" || "$bundle_id" == "unknown" ]] && return 0

-    # User-level Launch Agents
     if [[ -d ~/Library/LaunchAgents ]]; then
         while IFS= read -r -d '' plist; do
             launchctl unload "$plist" 2> /dev/null || true
         done < <(find ~/Library/LaunchAgents -maxdepth 1 -name "${bundle_id}*.plist" -print0 2> /dev/null)
     fi

-    # System-level services (requires sudo)
     if [[ "$has_system_files" == "true" ]]; then
         if [[ -d /Library/LaunchAgents ]]; then
             while IFS= read -r -d '' plist; do
@@ -102,9 +88,7 @@ stop_launch_services() {
     fi
 }

-# Remove a list of files (handles both regular files and symlinks)
-# Args: $1 = file_list (newline-separated), $2 = use_sudo (true/false)
-# Returns: number of files removed
+# Remove files (handles symlinks, optional sudo).
 remove_file_list() {
     local file_list="$1"
     local use_sudo="${2:-false}"
@@ -114,14 +98,12 @@ remove_file_list() {
         [[ -n "$file" && -e "$file" ]] || continue

         if [[ -L "$file" ]]; then
-            # Symlink: use direct rm
             if [[ "$use_sudo" == "true" ]]; then
                 sudo rm "$file" 2> /dev/null && ((count++)) || true
             else
                 rm "$file" 2> /dev/null && ((count++)) || true
             fi
         else
-            # Regular file/directory: use safe_remove
             if [[ "$use_sudo" == "true" ]]; then
                 safe_sudo_remove "$file" && ((count++)) || true
             else
@@ -133,8 +115,7 @@ remove_file_list() {
     echo "$count"
 }
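A subtlety worth keeping in mind when reading `remove_file_list`: under `set -e`, `((count++))` is itself a trap, because a post-increment arithmetic expression evaluates to the pre-increment value, so the very first increment (0 to 1) yields exit status 1. The trailing `|| true` therefore guards both a failed `rm` and the arithmetic quirk. A two-line demonstration:

```bash
#!/bin/bash
set -e
count=0
# Without '|| true' this would abort: ((count++)) expands to 0 the first
# time, and an arithmetic result of 0 maps to exit status 1.
((count++)) || true
echo "$count"   # prints 1; the script survived the increment
```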

-# Batch uninstall with single confirmation
-# Globals: selected_apps (read) - array of selected applications
+# Batch uninstall with single confirmation.
 batch_uninstall_applications() {
     local total_size_freed=0

@@ -144,19 +125,18 @@ batch_uninstall_applications() {
         return 0
     fi

-    # Pre-process: Check for running apps and calculate total impact
+    # Pre-scan: running apps, sudo needs, size.
     local -a running_apps=()
     local -a sudo_apps=()
     local total_estimated_size=0
     local -a app_details=()

-    # Analyze selected apps with progress indicator
     if [[ -t 1 ]]; then start_inline_spinner "Scanning files..."; fi
     for selected_app in "${selected_apps[@]}"; do
         [[ -z "$selected_app" ]] && continue
         IFS='|' read -r _ app_path app_name bundle_id _ _ <<< "$selected_app"

-        # Check if app is running using executable name from bundle
+        # Check running app by bundle executable if available.
         local exec_name=""
         if [[ -e "$app_path/Contents/Info.plist" ]]; then
             exec_name=$(defaults read "$app_path/Contents/Info.plist" CFBundleExecutable 2> /dev/null || echo "")
@@ -166,11 +146,7 @@ batch_uninstall_applications() {
             running_apps+=("$app_name")
         fi

-        # Check if app requires sudo to delete (either app bundle or system files)
-        # Need sudo if:
-        # 1. Parent directory is not writable (may be owned by another user or root)
-        # 2. App owner is root
-        # 3. App owner is different from current user
+        # Sudo needed if bundle owner/dir is not writable or system files exist.
         local needs_sudo=false
         local app_owner=$(get_file_owner "$app_path")
         local current_user=$(whoami)
@@ -180,11 +156,11 @@ batch_uninstall_applications() {
             needs_sudo=true
         fi

-        # Calculate size for summary (including system files)
+        # Size estimate includes related and system files.
         local app_size_kb=$(get_path_size_kb "$app_path")
         local related_files=$(find_app_files "$bundle_id" "$app_name")
         local related_size_kb=$(calculate_total_size "$related_files")
-        # system_files is a newline-separated string, not an array
+        # system_files is a newline-separated string, not an array.
         # shellcheck disable=SC2178,SC2128
         local system_files=$(find_app_system_files "$bundle_id" "$app_name")
         # shellcheck disable=SC2128
@@ -192,7 +168,6 @@ batch_uninstall_applications() {
         local total_kb=$((app_size_kb + related_size_kb + system_size_kb))
         ((total_estimated_size += total_kb))

-        # Check if system files require sudo
         # shellcheck disable=SC2128
         if [[ -n "$system_files" ]]; then
             needs_sudo=true
@@ -202,33 +177,28 @@ batch_uninstall_applications() {
             sudo_apps+=("$app_name")
         fi

-        # Check for sensitive user data (performance optimization: do this once)
+        # Check for sensitive user data once.
         local has_sensitive_data="false"
         if [[ -n "$related_files" ]] && echo "$related_files" | grep -qE "$SENSITIVE_DATA_REGEX"; then
             has_sensitive_data="true"
         fi

-        # Store details for later use
-        # Base64 encode file lists to handle multi-line data safely (single line)
+        # Store details for later use (base64 keeps lists on one line).
         local encoded_files
         encoded_files=$(printf '%s' "$related_files" | base64 | tr -d '\n')
         local encoded_system_files
         encoded_system_files=$(printf '%s' "$system_files" | base64 | tr -d '\n')
-        # Store needs_sudo to avoid recalculating during deletion phase
         app_details+=("$app_name|$app_path|$bundle_id|$total_kb|$encoded_files|$encoded_system_files|$has_sensitive_data|$needs_sudo")
     done
     if [[ -t 1 ]]; then stop_inline_spinner; fi

-    # Format size display (convert KB to bytes for bytes_to_human())
     local size_display=$(bytes_to_human "$((total_estimated_size * 1024))")

-    # Display detailed file list for each app before confirmation
     echo ""
     echo -e "${PURPLE_BOLD}Files to be removed:${NC}"
     echo ""

-    # Check for apps with user data that might need backup
-    # Performance optimization: use pre-calculated flags from app_details
+    # Warn if user data is detected.
     local has_user_data=false
     for detail in "${app_details[@]}"; do
         IFS='|' read -r _ _ _ _ _ _ has_sensitive_data <<< "$detail"
@@ -252,7 +222,7 @@ batch_uninstall_applications() {
         echo -e "${BLUE}${ICON_CONFIRM}${NC} ${app_name} ${GRAY}(${app_size_display})${NC}"
         echo -e " ${GREEN}${ICON_SUCCESS}${NC} ${app_path/$HOME/~}"

-        # Show related files (limit to 5 most important ones for brevity)
+        # Show related files (limit to 5).
         local file_count=0
         local max_files=5
         while IFS= read -r file; do
@@ -264,7 +234,7 @@ batch_uninstall_applications() {
             fi
         done <<< "$related_files"

-        # Show system files
+        # Show system files (limit to 5).
         local sys_file_count=0
         while IFS= read -r file; do
             if [[ -n "$file" && -e "$file" ]]; then
@@ -275,7 +245,6 @@ batch_uninstall_applications() {
             fi
         done <<< "$system_files"

-        # Show count of remaining files if truncated
         local total_hidden=$((file_count > max_files ? file_count - max_files : 0))
         ((total_hidden += sys_file_count > max_files ? sys_file_count - max_files : 0))
         if [[ $total_hidden -gt 0 ]]; then
@@ -283,7 +252,7 @@ batch_uninstall_applications() {
         fi
     done

-    # Show summary and get batch confirmation first (before asking for password)
+    # Confirmation before requesting sudo.
     local app_total=${#selected_apps[@]}
     local app_text="app"
     [[ $app_total -gt 1 ]] && app_text="apps"
@@ -315,9 +284,8 @@ batch_uninstall_applications() {
             ;;
     esac

-    # User confirmed, now request sudo access if needed
+    # Request sudo if needed.
     if [[ ${#sudo_apps[@]} -gt 0 ]]; then
-        # Check if sudo is already cached
         if ! sudo -n true 2> /dev/null; then
             if ! request_sudo_access "Admin required for system apps: ${sudo_apps[*]}"; then
                 echo ""
@@ -325,10 +293,9 @@ batch_uninstall_applications() {
                 return 1
             fi
         fi
-        # Start sudo keepalive with robust parent checking
+        # Keep sudo alive during uninstall.
         parent_pid=$$
         (while true; do
-            # Check if parent process still exists first
             if ! kill -0 "$parent_pid" 2> /dev/null; then
                 exit 0
             fi
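The keepalive subshell above refreshes sudo's timestamp for the duration of the uninstall and exits on its own once the parent dies. A self-contained sketch of the pattern; the 30-second interval and variable names are illustrative, not the exact Mole values:

```bash
#!/bin/bash
# Keep sudo's credential cache warm while a long privileged task runs.
parent_pid=$$

(
    while true; do
        # Exit quietly as soon as the parent process is gone.
        kill -0 "$parent_pid" 2> /dev/null || exit 0
        # Refresh non-interactively; stop if the cached credential expired.
        sudo -n true 2> /dev/null || exit 0
        sleep 30
    done
) &
keepalive_pid=$!

# ... long-running privileged work here ...

kill "$keepalive_pid" 2> /dev/null || true
wait "$keepalive_pid" 2> /dev/null || true
```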
@@ -340,10 +307,7 @@ batch_uninstall_applications() {

     if [[ -t 1 ]]; then start_inline_spinner "Uninstalling apps..."; fi

-    # Force quit running apps first (batch)
-    # Note: Apps are already killed in the individual uninstall loop below with app_path for precise matching
-
-    # Perform uninstallations (silent mode, show results at end)
+    # Perform uninstallations (silent mode, show results at end).
     if [[ -t 1 ]]; then stop_inline_spinner; fi
     local success_count=0 failed_count=0
     local -a failed_items=()
@@ -354,23 +318,19 @@ batch_uninstall_applications() {
         local system_files=$(decode_file_list "$encoded_system_files" "$app_name")
         local reason=""

-        # Note: needs_sudo is already calculated during scanning phase (performance optimization)
-
-        # Stop Launch Agents and Daemons before removal
+        # Stop Launch Agents/Daemons before removal.
         local has_system_files="false"
         [[ -n "$system_files" ]] && has_system_files="true"
         stop_launch_services "$bundle_id" "$has_system_files"

-        # Force quit app if still running
         if ! force_kill_app "$app_name" "$app_path"; then
             reason="still running"
         fi

-        # Remove the application only if not running
+        # Remove the application only if not running.
         if [[ -z "$reason" ]]; then
             if [[ "$needs_sudo" == true ]]; then
                 if ! safe_sudo_remove "$app_path"; then
-                    # Determine specific failure reason (only fetch owner info when needed)
                     local app_owner=$(get_file_owner "$app_path")
                     local current_user=$(whoami)
                     if [[ -n "$app_owner" && "$app_owner" != "$current_user" && "$app_owner" != "root" ]]; then
@@ -384,25 +344,18 @@ batch_uninstall_applications() {
             fi
         fi

-        # Remove related files if app removal succeeded
+        # Remove related files if app removal succeeded.
         if [[ -z "$reason" ]]; then
-            # Remove user-level files
             remove_file_list "$related_files" "false" > /dev/null
-            # Remove system-level files (requires sudo)
             remove_file_list "$system_files" "true" > /dev/null

-            # Clean up macOS defaults (preference domain)
-            # This removes configuration data stored in the macOS defaults system
-            # Note: This complements plist file deletion by clearing cached preferences
+            # Clean up macOS defaults (preference domains).
             if [[ -n "$bundle_id" && "$bundle_id" != "unknown" ]]; then
-                # 1. Standard defaults domain cleanup
                 if defaults read "$bundle_id" &> /dev/null; then
                     defaults delete "$bundle_id" 2> /dev/null || true
                 fi

-                # 2. Clean up ByHost preferences (machine-specific configs)
-                # These are often missed by standard cleanup tools
-                # Format: ~/Library/Preferences/ByHost/com.app.id.XXXX.plist
+                # ByHost preferences (machine-specific).
                 if [[ -d ~/Library/Preferences/ByHost ]]; then
                     find ~/Library/Preferences/ByHost -maxdepth 1 -name "${bundle_id}.*.plist" -delete 2> /dev/null || true
                 fi
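Preference cleanup on macOS has two layers, both visible in the hunk above: the standard defaults domain, and per-machine ByHost plists whose filenames embed a hardware UUID (`<bundle-id>.<UUID>.plist`) and are often missed by cleanup tools. A hedged sketch of both steps for a hypothetical bundle id:

```bash
#!/bin/bash
bundle_id="com.example.app"   # hypothetical bundle id

# 1) Standard defaults domain: only delete if the domain actually exists,
#    so a missing domain does not surface as an error.
if defaults read "$bundle_id" &> /dev/null; then
    defaults delete "$bundle_id" 2> /dev/null || true
fi

# 2) ByHost plists: machine-specific preferences stored per hardware UUID.
if [[ -d ~/Library/Preferences/ByHost ]]; then
    find ~/Library/Preferences/ByHost -maxdepth 1 \
        -name "${bundle_id}.*.plist" -delete 2> /dev/null || true
fi
```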
@@ -435,7 +388,7 @@ batch_uninstall_applications() {
         success_line+=", freed ${GREEN}${freed_display}${NC}"
     fi

-    # Format app list with max 3 per line
+    # Format app list with max 3 per line.
     if [[ -n "$success_list" ]]; then
         local idx=0
         local is_first_line=true
@@ -445,25 +398,20 @@ batch_uninstall_applications() {
             local display_item="${GREEN}${app_name}${NC}"

             if ((idx % 3 == 0)); then
-                # Start new line
                 if [[ -n "$current_line" ]]; then
                     summary_details+=("$current_line")
                 fi
                 if [[ "$is_first_line" == true ]]; then
-                    # First line: append to success_line
                     current_line="${success_line}: $display_item"
                     is_first_line=false
                 else
-                    # Subsequent lines: just the apps
                     current_line="$display_item"
                 fi
             else
-                # Add to current line
                 current_line="$current_line, $display_item"
             fi
             ((idx++))
         done
-        # Add the last line
         if [[ -n "$current_line" ]]; then
             summary_details+=("$current_line")
         fi
@@ -509,12 +457,11 @@ batch_uninstall_applications() {
     print_summary_block "$title" "${summary_details[@]}"
     printf '\n'

-    # Clean up Dock entries for uninstalled apps
+    # Clean up Dock entries for uninstalled apps.
     if [[ $success_count -gt 0 ]]; then
         local -a removed_paths=()
         for detail in "${app_details[@]}"; do
             IFS='|' read -r app_name app_path _ _ _ _ <<< "$detail"
-            # Check if this app was successfully removed
             for success_name in "${success_items[@]}"; do
                 if [[ "$success_name" == "$app_name" ]]; then
                     removed_paths+=("$app_path")
@@ -527,14 +474,14 @@ batch_uninstall_applications() {
             fi
         fi

-    # Clean up sudo keepalive if it was started
+    # Clean up sudo keepalive if it was started.
     if [[ -n "${sudo_keepalive_pid:-}" ]]; then
         kill "$sudo_keepalive_pid" 2> /dev/null || true
         wait "$sudo_keepalive_pid" 2> /dev/null || true
         sudo_keepalive_pid=""
     fi

-    # Invalidate cache if any apps were successfully uninstalled
+    # Invalidate cache if any apps were successfully uninstalled.
     if [[ $success_count -gt 0 ]]; then
         local cache_file="$HOME/.cache/mole/app_scan_cache"
         rm -f "$cache_file" 2> /dev/null || true
mole
@@ -1,83 +1,58 @@
 #!/bin/bash
-# Mole - Main Entry Point
-# A comprehensive macOS maintenance tool
-#
-# Clean     - Remove junk files and optimize system
-# Uninstall - Remove applications completely
-# Analyze   - Interactive disk space explorer
-#
-# Usage:
-#   ./mole            # Interactive main menu
-#   ./mole clean      # Direct clean mode
-#   ./mole uninstall  # Direct uninstall mode
-#   ./mole analyze    # Disk space explorer
-#   ./mole --help     # Show help
+# Mole - Main CLI entrypoint.
+# Routes subcommands and interactive menu.
+# Handles update/remove flows.

 set -euo pipefail

-# Get script directory
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

-# Source common functions
 source "$SCRIPT_DIR/lib/core/common.sh"

-# Set up cleanup trap for temporary files
 trap cleanup_temp_files EXIT INT TERM

-# Version info
+# Version and update helpers
 VERSION="1.17.0"
 MOLE_TAGLINE="Deep clean and optimize your Mac."

-# Check TouchID configuration
 is_touchid_configured() {
     local pam_sudo_file="/etc/pam.d/sudo"
     [[ -f "$pam_sudo_file" ]] && grep -q "pam_tid.so" "$pam_sudo_file" 2> /dev/null
 }

-# Get latest version from remote repository
 get_latest_version() {
     curl -fsSL --connect-timeout 2 --max-time 3 -H "Cache-Control: no-cache" \
         "https://raw.githubusercontent.com/tw93/mole/main/mole" 2> /dev/null |
         grep '^VERSION=' | head -1 | sed 's/VERSION="\(.*\)"/\1/'
 }

-# Get latest version from GitHub API (works for both Homebrew and manual installations)
 get_latest_version_from_github() {
     local version
     version=$(curl -fsSL --connect-timeout 2 --max-time 3 \
         "https://api.github.com/repos/tw93/mole/releases/latest" 2> /dev/null |
         grep '"tag_name"' | head -1 | sed -E 's/.*"([^"]+)".*/\1/')
-    # Remove 'v' or 'V' prefix if present
     version="${version#v}"
     version="${version#V}"
     echo "$version"
 }

-# Check if installed via Homebrew
+# Install detection (Homebrew vs manual).
 is_homebrew_install() {
-    # Fast path: check if mole binary is a Homebrew symlink
     local mole_path
     mole_path=$(command -v mole 2> /dev/null) || return 1

-    # Check if mole is a symlink pointing to Homebrew Cellar
     if [[ -L "$mole_path" ]] && readlink "$mole_path" | grep -q "Cellar/mole"; then
-        # Symlink looks good, but verify brew actually manages it
         if command -v brew > /dev/null 2>&1; then
-            # Use fast brew list check
             brew list --formula 2> /dev/null | grep -q "^mole$" && return 0
         else
-            # brew not available - cannot update/remove via Homebrew
             return 1
         fi
     fi

-    # Fallback: check common Homebrew paths and verify with Cellar
     if [[ -f "$mole_path" ]]; then
         case "$mole_path" in
             /opt/homebrew/bin/mole | /usr/local/bin/mole)
-                # Verify Cellar directory exists
                 if [[ -d /opt/homebrew/Cellar/mole ]] || [[ -d /usr/local/Cellar/mole ]]; then
-                    # Double-check with brew if available
                     if command -v brew > /dev/null 2>&1; then
                         brew list --formula 2> /dev/null | grep -q "^mole$" && return 0
                     else
@@ -88,7 +63,6 @@ is_homebrew_install() {
         esac
     fi

-    # Last resort: check custom Homebrew prefix
     if command -v brew > /dev/null 2>&1; then
         local brew_prefix
         brew_prefix=$(brew --prefix 2> /dev/null)
@@ -100,22 +74,17 @@ is_homebrew_install() {
     return 1
 }

-# Check for updates (non-blocking, always check in background)
+# Background update notice
 check_for_updates() {
     local msg_cache="$HOME/.cache/mole/update_message"
     ensure_user_dir "$(dirname "$msg_cache")"
     ensure_user_file "$msg_cache"

-    # Background version check
-    # Always check in background, display result from previous check
     (
         local latest

-        # Use GitHub API for version check (works for both Homebrew and manual installs)
-        # Try API first (faster and more reliable)
         latest=$(get_latest_version_from_github)
         if [[ -z "$latest" ]]; then
-            # Fallback to parsing mole script from raw GitHub
             latest=$(get_latest_version)
         fi

@@ -128,7 +97,6 @@ check_for_updates() {
     disown 2> /dev/null || true
 }
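`check_for_updates` never blocks the menu: the network check runs in a disowned background subshell that only writes (or truncates) a message file, and the notification shown on launch comes from whatever the previous run left in the cache. A condensed sketch of that write-behind pattern; the version strings are placeholders:

```bash
#!/bin/bash
msg_cache="$HOME/.cache/mole/update_message"
mkdir -p "$(dirname "$msg_cache")"

current="1.17.0"      # placeholder: the running version
(
    latest="1.18.0"   # placeholder: normally fetched from the GitHub API
    if [[ -n "$latest" && "$latest" != "$current" ]]; then
        printf 'Update available: %s\n' "$latest" > "$msg_cache"
    else
        : > "$msg_cache"   # truncate: nothing to announce next launch
    fi
) > /dev/null 2>&1 &
disown 2> /dev/null || true
```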

-# Show update notification if available
 show_update_notification() {
     local msg_cache="$HOME/.cache/mole/update_message"
     if [[ -f "$msg_cache" && -s "$msg_cache" ]]; then
@@ -137,6 +105,7 @@ show_update_notification() {
     fi
 }

+# UI helpers
 show_brand_banner() {
     cat << EOF
 ${GREEN} __ __ _ ${NC}
@@ -149,7 +118,6 @@ EOF
 }

 animate_mole_intro() {
-    # Non-interactive: skip animation
     if [[ ! -t 1 ]]; then
         return
     fi
@@ -242,7 +210,6 @@ show_version() {
     local sip_status
     if command -v csrutil > /dev/null; then
         sip_status=$(csrutil status 2> /dev/null | grep -o "enabled\|disabled" || echo "Unknown")
-        # Capitalize first letter
         sip_status="$(tr '[:lower:]' '[:upper:]' <<< "${sip_status:0:1}")${sip_status:1}"
     else
         sip_status="Unknown"
@@ -295,22 +262,18 @@ show_help() {
     echo
 }

-# Simple update function
+# Update flow (Homebrew or installer).
 update_mole() {
-    # Set up cleanup trap for update process
     local update_interrupted=false
     trap 'update_interrupted=true; echo ""; exit 130' INT TERM

-    # Check if installed via Homebrew
     if is_homebrew_install; then
         update_via_homebrew "$VERSION"
         exit 0
     fi

-    # Check for updates
     local latest
     latest=$(get_latest_version_from_github)
-    # Fallback to raw GitHub if API fails
     [[ -z "$latest" ]] && latest=$(get_latest_version)

     if [[ -z "$latest" ]]; then
@@ -327,7 +290,6 @@ update_mole() {
         exit 0
     fi

-    # Download and run installer with progress
     if [[ -t 1 ]]; then
         start_inline_spinner "Downloading latest version..."
     else
@@ -341,7 +303,6 @@ update_mole() {
         exit 1
     }

-    # Download installer with progress and better error handling
     local download_error=""
     if command -v curl > /dev/null 2>&1; then
         download_error=$(curl -fsSL --connect-timeout 10 --max-time 60 "$installer_url" -o "$tmp_installer" 2>&1) || {
@@ -350,7 +311,6 @@ update_mole() {
             rm -f "$tmp_installer"
             log_error "Update failed (curl error: $curl_exit)"

-            # Provide helpful error messages based on curl exit codes
             case $curl_exit in
                 6) echo -e "${YELLOW}Tip:${NC} Could not resolve host. Check DNS or network connection." ;;
                 7) echo -e "${YELLOW}Tip:${NC} Failed to connect. Check network or proxy settings." ;;
@@ -381,7 +341,6 @@ update_mole() {
     if [[ -t 1 ]]; then stop_inline_spinner; fi
     chmod +x "$tmp_installer"

-    # Determine install directory
     local mole_path
     mole_path="$(command -v mole 2> /dev/null || echo "$0")"
     local install_dir
@@ -408,7 +367,6 @@ update_mole() {
         echo "Installing update..."
     fi

-    # Helper function to process installer output
     process_install_output() {
         local output="$1"
         if [[ -t 1 ]]; then stop_inline_spinner; fi
@@ -419,7 +377,6 @@ update_mole() {
             printf '\n%s\n' "$filtered_output"
         fi

-        # Only show success message if installer didn't already do so
         if ! printf '%s\n' "$output" | grep -Eq "Updated to latest version|Already on latest version"; then
             local new_version
             new_version=$("$mole_path" --version 2> /dev/null | awk 'NF {print $NF}' || echo "")
@@ -429,7 +386,6 @@ update_mole() {
         fi
     }

-    # Run installer with visible output (but capture for error handling)
     local install_output
     local update_tag="V${latest#V}"
     local config_dir="${MOLE_CONFIG_DIR:-$SCRIPT_DIR}"
@@ -439,7 +395,6 @@ update_mole() {
     if install_output=$(MOLE_VERSION="$update_tag" "$tmp_installer" --prefix "$install_dir" --config "$config_dir" --update 2>&1); then
         process_install_output "$install_output"
     else
-        # Retry without --update flag
         if install_output=$(MOLE_VERSION="$update_tag" "$tmp_installer" --prefix "$install_dir" --config "$config_dir" 2>&1); then
             process_install_output "$install_output"
         else
@@ -455,9 +410,8 @@ update_mole() {
     rm -f "$HOME/.cache/mole/update_message"
 }

-# Remove Mole from system
+# Remove flow (Homebrew + manual + config/cache).
 remove_mole() {
-    # Detect all installations with loading
     if [[ -t 1 ]]; then
         start_inline_spinner "Detecting Mole installations..."
     else
@@ -484,22 +438,18 @@ remove_mole() {
         fi
     fi

-    # Check Homebrew
     if [[ "$brew_has_mole" == "true" ]] || is_homebrew_install; then
         is_homebrew=true
     fi

-    # Find mole installations using which/command
     local found_mole
     found_mole=$(command -v mole 2> /dev/null || true)
     if [[ -n "$found_mole" && -f "$found_mole" ]]; then
-        # Check if it's not a Homebrew symlink
         if [[ ! -L "$found_mole" ]] || ! readlink "$found_mole" | grep -q "Cellar/mole"; then
             manual_installs+=("$found_mole")
         fi
     fi

-    # Also check common locations as fallback
     local -a fallback_paths=(
         "/usr/local/bin/mole"
         "$HOME/.local/bin/mole"
@@ -508,21 +458,18 @@ remove_mole() {

     for path in "${fallback_paths[@]}"; do
         if [[ -f "$path" && "$path" != "$found_mole" ]]; then
-            # Check if it's not a Homebrew symlink
             if [[ ! -L "$path" ]] || ! readlink "$path" | grep -q "Cellar/mole"; then
                 manual_installs+=("$path")
             fi
         fi
     done

-    # Find mo alias
     local found_mo
     found_mo=$(command -v mo 2> /dev/null || true)
     if [[ -n "$found_mo" && -f "$found_mo" ]]; then
         alias_installs+=("$found_mo")
     fi

-    # Also check common locations for mo
     local -a alias_fallback=(
         "/usr/local/bin/mo"
         "$HOME/.local/bin/mo"
@@ -541,7 +488,6 @@ remove_mole() {

     printf '\n'

-    # Check if anything to remove
     local manual_count=${#manual_installs[@]}
     local alias_count=${#alias_installs[@]}
     if [[ "$is_homebrew" == "false" && ${manual_count:-0} -eq 0 && ${alias_count:-0} -eq 0 ]]; then
@@ -549,7 +495,6 @@ remove_mole() {
         exit 0
     fi

-    # List items for removal
     echo -e "${YELLOW}Remove Mole${NC} - will delete the following:"
     if [[ "$is_homebrew" == "true" ]]; then
         echo " - Mole via Homebrew"
@@ -561,7 +506,6 @@ remove_mole() {
     echo " - ~/.cache/mole"
     echo -ne "${PURPLE}${ICON_ARROW}${NC} Press ${GREEN}Enter${NC} to confirm, ${GRAY}ESC${NC} to cancel: "

-    # Read single key
     IFS= read -r -s -n1 key || key=""
     drain_pending_input # Clean up any escape sequence remnants
     case "$key" in
@@ -570,14 +514,12 @@ remove_mole() {
             ;;
         "" | $'\n' | $'\r')
             printf "\r\033[K" # Clear the prompt line
-            # Continue with removal
             ;;
         *)
             exit 0
             ;;
     esac

-    # Remove Homebrew installation
     local has_error=false
     if [[ "$is_homebrew" == "true" ]]; then
         if [[ -z "$brew_cmd" ]]; then
@@ -598,18 +540,14 @@ remove_mole() {
             log_success "Mole uninstalled via Homebrew."
         fi
     fi
-    # Remove manual installations
     if [[ ${manual_count:-0} -gt 0 ]]; then
         for install in "${manual_installs[@]}"; do
             if [[ -f "$install" ]]; then
-                # Check if directory requires sudo (deletion is a directory operation)
                 if [[ ! -w "$(dirname "$install")" ]]; then
-                    # Requires sudo
                     if ! sudo rm -f "$install" 2> /dev/null; then
                         has_error=true
                     fi
                 else
-                    # Regular user permission
                     if ! rm -f "$install" 2> /dev/null; then
                         has_error=true
                     fi
@@ -620,7 +558,6 @@ remove_mole() {
     if [[ ${alias_count:-0} -gt 0 ]]; then
         for alias in "${alias_installs[@]}"; do
             if [[ -f "$alias" ]]; then
-                # Check if directory requires sudo
                 if [[ ! -w "$(dirname "$alias")" ]]; then
                     if ! sudo rm -f "$alias" 2> /dev/null; then
                         has_error=true
@@ -633,16 +570,13 @@ remove_mole() {
             fi
         done
     fi
-    # Clean up cache first (silent)
     if [[ -d "$HOME/.cache/mole" ]]; then
         rm -rf "$HOME/.cache/mole" 2> /dev/null || true
     fi
-    # Clean up configuration last (silent)
     if [[ -d "$HOME/.config/mole" ]]; then
         rm -rf "$HOME/.config/mole" 2> /dev/null || true
     fi

-    # Show final result
     local final_message
     if [[ "$has_error" == "true" ]]; then
         final_message="${YELLOW}${ICON_ERROR} Mole uninstalled with some errors, thank you for using Mole!${NC}"
@@ -654,38 +588,33 @@ remove_mole() {
     exit 0
 }

-# Display main menu options with minimal refresh to avoid flicker
+# Menu UI
 show_main_menu() {
     local selected="${1:-1}"
     local _full_draw="${2:-true}" # Kept for compatibility (unused)
     local banner="${MAIN_MENU_BANNER:-}"
     local update_message="${MAIN_MENU_UPDATE_MESSAGE:-}"

-    # Fallback if globals missing (should not happen)
     if [[ -z "$banner" ]]; then
         banner="$(show_brand_banner)"
         MAIN_MENU_BANNER="$banner"
     fi

-    printf '\033[H' # Move cursor to home
+    printf '\033[H'

     local line=""
-    # Leading spacer
     printf '\r\033[2K\n'

-    # Brand banner
     while IFS= read -r line || [[ -n "$line" ]]; do
         printf '\r\033[2K%s\n' "$line"
     done <<< "$banner"

-    # Update notification block (if present)
     if [[ -n "$update_message" ]]; then
         while IFS= read -r line || [[ -n "$line" ]]; do
             printf '\r\033[2K%s\n' "$line"
         done <<< "$update_message"
     fi

-    # Spacer before menu options
     printf '\r\033[2K\n'

     printf '\r\033[2K%s\n' "$(show_menu_option 1 "Clean Free up disk space" "$([[ $selected -eq 1 ]] && echo true || echo false)")"
@@ -696,7 +625,6 @@ show_main_menu() {

     if [[ -t 0 ]]; then
         printf '\r\033[2K\n'
-        # Show TouchID if not configured, otherwise show Update
         local controls="${GRAY}↑↓ | Enter | M More | "
         if ! is_touchid_configured; then
             controls="${controls}T TouchID"
@@ -708,13 +636,10 @@ show_main_menu() {
         printf '\r\033[2K\n'
     fi

-    # Clear any remaining content below without full screen wipe
     printf '\033[J'
 }

-# Interactive main menu loop
 interactive_main_menu() {
-    # Show intro animation only once per terminal tab
     if [[ -t 1 ]]; then
         local tty_name
         tty_name=$(tty 2> /dev/null || echo "")
@@ -820,13 +745,12 @@ interactive_main_menu() {
         "QUIT") cleanup_and_exit ;;
     esac

-        # Drain any accumulated input after processing (e.g., touchpad scroll events)
         drain_pending_input
     done
 }

+# CLI dispatch
 main() {
-    # Parse global flags
     local -a args=()
     for arg in "$@"; do
         case "$arg" in
scripts/build-analyze.sh
@@ -1,51 +0,0 @@
-#!/bin/bash
-# Build Universal Binary for analyze-go
-# Supports both Apple Silicon and Intel Macs
-
-set -euo pipefail
-
-cd "$(dirname "$0")/.."
-
-# Check if Go is installed
-if ! command -v go > /dev/null 2>&1; then
-    echo "Error: Go not installed"
-    echo "Install: brew install go"
-    exit 1
-fi
-
-echo "Building analyze-go for multiple architectures..."
-
-# Get version info
-VERSION=$(git describe --tags --always --dirty 2> /dev/null || echo "dev")
-BUILD_TIME=$(date -u '+%Y-%m-%d_%H:%M:%S')
-LDFLAGS="-s -w -X main.Version=$VERSION -X main.BuildTime=$BUILD_TIME"
-
-echo "  Version: $VERSION"
-echo "  Build time: $BUILD_TIME"
-echo ""
-
-# Build for arm64 (Apple Silicon)
-echo "  → Building for arm64..."
-GOARCH=arm64 go build -ldflags="$LDFLAGS" -trimpath -o bin/analyze-go-arm64 ./cmd/analyze
-
-# Build for amd64 (Intel)
-echo "  → Building for amd64..."
-GOARCH=amd64 go build -ldflags="$LDFLAGS" -trimpath -o bin/analyze-go-amd64 ./cmd/analyze
-
-# Create Universal Binary
-echo "  → Creating Universal Binary..."
-lipo -create bin/analyze-go-arm64 bin/analyze-go-amd64 -output bin/analyze-go
-
-# Clean up temporary files
-rm bin/analyze-go-arm64 bin/analyze-go-amd64
-
-# Verify
-echo ""
-echo "✓ Build complete!"
-echo ""
-file bin/analyze-go
-size_bytes=$(/usr/bin/stat -f%z bin/analyze-go 2> /dev/null || echo 0)
-size_mb=$((size_bytes / 1024 / 1024))
-printf "Size: %d MB (%d bytes)\n" "$size_mb" "$size_bytes"
-echo ""
-echo "Binary supports: arm64 (Apple Silicon) + x86_64 (Intel)"
scripts/build-status.sh
@@ -1,44 +0,0 @@
-#!/bin/bash
-# Build Universal Binary for status-go
-# Supports both Apple Silicon and Intel Macs
-
-set -euo pipefail
-
-cd "$(dirname "$0")/.."
-
-if ! command -v go > /dev/null 2>&1; then
-    echo "Error: Go not installed"
-    echo "Install: brew install go"
-    exit 1
-fi
-
-echo "Building status-go for multiple architectures..."
-
-VERSION=$(git describe --tags --always --dirty 2> /dev/null || echo "dev")
-BUILD_TIME=$(date -u '+%Y-%m-%d_%H:%M:%S')
-LDFLAGS="-s -w -X main.Version=$VERSION -X main.BuildTime=$BUILD_TIME"
-
-echo "  Version: $VERSION"
-echo "  Build time: $BUILD_TIME"
-echo ""
-
-echo "  → Building for arm64..."
-GOARCH=arm64 go build -ldflags="$LDFLAGS" -trimpath -o bin/status-go-arm64 ./cmd/status
-
-echo "  → Building for amd64..."
-GOARCH=amd64 go build -ldflags="$LDFLAGS" -trimpath -o bin/status-go-amd64 ./cmd/status
-
-echo "  → Creating Universal Binary..."
-lipo -create bin/status-go-arm64 bin/status-go-amd64 -output bin/status-go
-
-rm bin/status-go-arm64 bin/status-go-amd64
-
-echo ""
-echo "✓ Build complete!"
-echo ""
-file bin/status-go
-size_bytes=$(/usr/bin/stat -f%z bin/status-go 2> /dev/null || echo 0)
-size_mb=$((size_bytes / 1024 / 1024))
-printf "Size: %d MB (%d bytes)\n" "$size_mb" "$size_bytes"
-echo ""
-echo "Binary supports: arm64 (Apple Silicon) + x86_64 (Intel)"
@@ -1,71 +0,0 @@
-#!/bin/bash
-# Pre-commit hook to ensure bin/analyze-go and bin/status-go are universal binaries
-
-set -e
-
-# ANSI color codes
-RED='\033[0;31m'
-GREEN='\033[0;32m'
-YELLOW='\033[1;33m'
-NC='\033[0m' # No Color
-
-# Check if binaries are being added or modified (ignore deletions)
-binaries=()
-while read -r status path; do
-    case "$status" in
-        A|M)
-            if [[ "$path" == "bin/analyze-go" || "$path" == "bin/status-go" ]]; then
-                binaries+=("$path")
-            fi
-            ;;
-    esac
-done < <(git diff --cached --name-status)
-
-# If no binaries are being committed, exit early
-if [[ ${#binaries[@]} -eq 0 ]]; then
-    exit 0
-fi
-
-echo -e "${YELLOW}Checking compiled binaries...${NC}"
-
-# Verify each binary is a universal binary
-all_valid=true
-for binary in "${binaries[@]}"; do
-    if [[ ! -f "$binary" ]]; then
-        echo -e "${RED}✗ $binary not found${NC}"
-        all_valid=false
-        continue
-    fi
-
-    # Check if it's a universal binary
-    if file "$binary" | grep -q "Mach-O universal binary"; then
-        # Verify it contains both x86_64 and arm64
-        if lipo -info "$binary" 2>/dev/null | grep -q "x86_64 arm64"; then
-            echo -e "${GREEN}✓ $binary is a universal binary (x86_64 + arm64)${NC}"
-        elif lipo -info "$binary" 2>/dev/null | grep -q "arm64 x86_64"; then
-            echo -e "${GREEN}✓ $binary is a universal binary (x86_64 + arm64)${NC}"
-        else
-            echo -e "${RED}✗ $binary is missing required architectures${NC}"
-            lipo -info "$binary"
-            all_valid=false
-        fi
-    else
-        echo -e "${RED}✗ $binary is not a universal binary${NC}"
-        file "$binary"
-        all_valid=false
-    fi
-done
-
-if [[ "$all_valid" == "false" ]]; then
-    echo ""
-    echo -e "${RED}Commit rejected: binaries must be universal (x86_64 + arm64)${NC}"
-    echo ""
-    echo "To create universal binaries, run:"
-    echo "  ./scripts/build-analyze.sh"
-    echo "  ./scripts/build-status.sh"
-    echo ""
-    exit 1
-fi
-
-echo -e "${GREEN}All binaries verified!${NC}"
-exit 0
@@ -1,28 +0,0 @@
-#!/bin/bash
-# Install git hooks for Mole development
-
-set -euo pipefail
-
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
-HOOKS_DIR="$REPO_ROOT/.git/hooks"
-
-if [[ ! -d "$REPO_ROOT/.git" ]]; then
-    echo "Error: Not in a git repository"
-    exit 1
-fi
-
-echo "Installing git hooks..."
-
-# Install pre-commit hook
-if [[ -f "$SCRIPT_DIR/hooks/pre-commit" ]]; then
-    cp "$SCRIPT_DIR/hooks/pre-commit" "$HOOKS_DIR/pre-commit"
-    chmod +x "$HOOKS_DIR/pre-commit"
-    echo "✓ Installed pre-commit hook (validates universal binaries)"
-fi
-
-echo ""
-echo "Git hooks installed successfully!"
-echo ""
-echo "The pre-commit hook will ensure that bin/analyze-go and bin/status-go"
-echo "are universal binaries (x86_64 + arm64) before allowing commits."