10 Linux commands that will transform your development workflow

Aug 15, 2025

What separates an average developer from an expert isn’t the IDE or the language—it’s how well they master their environment.

If you’ve followed the Linux series so far, you already know the basic commands and understand how permissions work. Now it’s time to level up: these 10 commands will transform your productivity and turn you into that developer who solves complex problems with just a few lines in the terminal.

This isn’t about memorizing syntax—it’s about changing how you think. Each command we’ll explore solves real day-to-day problems: finding lost files, processing logs, automating repetitive tasks, or debugging applications. These are the tools that separate those who “survive” in Linux from those who truly master it.

1. find — The file detective

What does it do?

Locates files and directories based on specific criteria. Much more powerful than any GUI search tool.

Essential use cases


     # Find all JavaScript files modified in the last 24 hours
     find . -name "*.js" -mtime -1

     # Search for config files containing 'docker' in the name
     find /etc -name "*docker*" 2>/dev/null

     # Find large files (>100MB) taking up space
     find . -size +100M -type f

     # Locate files with specific permissions (potential security issues)
     find . -perm 777 -type f

     # Find and delete node_modules folders to free up space
     find . -name "node_modules" -type d -prune -exec rm -rf {} +
     

Important find parameters

  • . — Current directory (starting point)
  • -name "pattern" — Search by filename
  • -mtime -1 — Modified in the last day (-1 = less than 1 day)
  • -size +100M — Files larger than 100MB (+ = greater than)
  • -type f/d — Type: f=file, d=directory
  • -perm 777 — Files with specific permissions
  • -exec command {} + — Execute command on found results
  • 2>/dev/null — Redirect errors to avoid noise

Why it’s crucial

In large projects, manually searching for files is impractical. find lets you locate exactly what you need using complex criteria.
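Most real searches combine several tests at once. A minimal sketch (the file names are illustrative): escaped parentheses group tests, and `-o` means logical OR:

```shell
# JS files modified in the last day, OR any file larger than 1MB.
# \( \) group tests (escaped so the shell doesn't interpret them);
# -o is logical OR; the implicit operator between tests is AND.
find . -type f \( -name "*.js" -mtime -1 -o -size +1M \) 2>/dev/null
```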

2. grep — The content excavator

What does it do?

Searches for text patterns within files. Like Ctrl+F but infinitely more powerful.

Essential use cases


     # Find all functions containing 'async' in TypeScript files
     grep -r "async" --include="*.ts" .

     # Search for errors in logs excluding warnings
     grep -E "ERROR|FATAL" /var/log/app.log | grep -v "WARNING"

     # Find environment variables used in your project
     grep -r "process\.env\." --include="*.js" src/

     # Locate all TODOs in the code
     grep -rn "TODO\|FIXME\|HACK" src/

     # Search for IPs in configuration files
     grep -rE "([0-9]{1,3}\.){3}[0-9]{1,3}" config/
    

Key grep parameters

  • -r — Recursive search in subdirectories
  • -E — Extended regular expressions (allows |, +, ?)
  • -v — Inverts the search (excludes matching lines)
  • -n — Shows line numbers
  • -l — Only shows filenames containing matches
  • -i — Ignores case
  • --include="pattern" — Only searches files matching the pattern
  • \| — Logical OR in patterns (escaped for bash)

Powerful combination with find


     # Find Python files that import a specific library
     find . -name "*.py" -exec grep -l "import pandas" {} \;

     # Search for database configurations in the project
     find . -name "*.yml" -o -name "*.yaml" | xargs grep -i "database"
     

3. awk — The data processing powerhouse

What does it do?

Processes and manipulates structured text. Ideal for extracting specific information from logs, CSVs, or command outputs.

Essential use cases


     # Extract only IPs from an Apache log
     awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c

     # Calculate the total size of files in the current directory
     ls -la | awk 'NR>1 {total += $5} END {print "Total:", total/1024/1024, "MB"}'

     # Process a CSV and extract specific columns
     awk -F',' '{print $1, $3}' data.csv

     # Find processes consuming more memory
     ps aux | awk '$4 > 5.0 {print $2, $4, $11}'

     # Analyze logs by HTTP status codes
     awk '{print $9}' access.log | sort | uniq -c | sort -nr
     

Essential awk syntax

  • $1, $2, $3... — Columns/fields (separated by spaces by default)
  • -F',' — Defines field separator (here: comma for CSV)
  • {print $1} — Action: print first column
  • $4 > 5.0 — Condition: filter lines where column 4 > 5.0
  • END {command} — Execute command at end of processing
  • total += $5 — Sum values from column 5
  • substr($1,1,13) — Extract substring: from position 1, 13 characters
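As a sketch of how these pieces compose, here is a one-liner that averages a numeric column (the log format is assumed: a name in column 1, milliseconds in column 2):

```shell
# Average the second column; the counter n guards against empty input
awk '{sum += $2; n++} END {if (n) printf "avg: %.2f ms\n", sum/n}' times.log
```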

Real example: Analyze application logs


     # Count errors per hour in timestamped logs
     awk '/ERROR/ {print substr($1,1,13)}' app.log | uniq -c
    

4. sed — The text editing ninja

What does it do?

Modifies text on the fly without opening files. Perfect for massive refactoring or data cleanup.

Essential use cases


     # Replace API URLs in all configuration files
     sed -i 's/api\.dev\.com/api\.prod\.com/g' config/*.json

     # Extract only lines between two patterns
     sed -n '/START/,/END/p' log.txt

     # Remove empty lines and comments
     sed '/^#/d; /^$/d' config.conf

     # Add a prefix to each line
     sed 's/^/LOG: /' error.txt

     # Convert date format
     echo "2025-08-17" | sed 's/-/\//g'
    

Basic sed commands

  • -i — In-place editing, modifies the original file
  • s/pattern/replacement/g — Substitute: s=substitute, g=global (all occurrences)
  • -n — Quiet mode: only prints what it is explicitly told to print
  • /START/,/END/p — Print lines from START pattern to END
  • /^#/d — Delete lines starting with # (d=delete)
  • /^$/d — Delete empty lines (^$ = line start and end with no content)
  • s/^/prefix/ — Add text at the beginning of each line (^ = line start)
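Before running `-i` across a whole project, note that GNU sed accepts a backup suffix glued to the flag, so you can undo a bad substitution:

```shell
# Edit in place, but keep the original as config.conf.bak
sed -i.bak 's/localhost/127.0.0.1/g' config.conf
```

On macOS/BSD sed the suffix must be a separate argument (`sed -i .bak ...`).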

Practical example: Massive refactoring


     # Change old imports to new ones throughout the project
     find src/ -name "*.js" -exec sed -i 's/old-library/new-library/g' {} \;
    

5. xargs — The command multiplier

What does it do?

Converts output from one command into arguments for another. It’s the glue that connects commands elegantly.

Essential use cases


     # Find and delete backup files (null-delimited, safe with spaces in names)
     find . -name "*.bak" -print0 | xargs -0 rm

     # Search text in multiple found files
     find . -name "*.log" | xargs grep "ERROR"

     # Download multiple URLs from a file
     cat urls.txt | xargs -I {} curl -O {}

     # Change permissions on multiple files
     ls *.sh | xargs chmod +x

     # Process files in parallel (4 simultaneous jobs)
     find . -name "*.jpg" | xargs -P 4 -I {} convert {} {}.webp
    

Useful xargs options

  • -I {} — Defines placeholder for each input element
  • -P 4 — Execute up to 4 processes in parallel
  • -n 1 — Process one argument per command (useful for batches)
  • -d '\n' — Define custom delimiter (default: spaces and newlines)
  • -0 — Read null-delimited input (pair with find -print0 for filenames with spaces)
  • -r — Don’t run the command if there is no input (avoids errors)
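A quick sketch of `-n` and `-r` in action, with `echo` standing in for a real command:

```shell
# -n 2 batches the input into two arguments per invocation:
# prints "a b", then "c d", then "e" on separate lines
printf 'a b c d e\n' | xargs -n 2 echo

# -r suppresses the command entirely when the input is empty
# (without it, rm would run with no arguments and fail)
printf '' | xargs -r rm
```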

6. curl — The universal communicator

What does it do?

Makes HTTP requests from the terminal. Essential for testing APIs, downloading files, or automating web interactions.

curl-format.txt file for performance measurement

Create a file named curl-format.txt with the template below; curl reads it (via -w "@curl-format.txt") to format the timing data for a request:


     total_time:        %{time_total}s
     dns_time:          %{time_namelookup}s
     connect_time:      %{time_connect}s
     status_code:       %{http_code}
     

Essential use cases


     # Test an API with JSON data
     curl -X POST -H "Content-Type: application/json" \
          -d '{"user":"test","pass":"123"}' \
          https://api.example.com/login

     # Download files with progress bar
     curl -L -o file.zip https://github.com/user/repo/archive/main.zip

     # Test API response time
     curl -w "@curl-format.txt" -o /dev/null -s https://api.example.com/health

     # Follow redirects and save cookies
     curl -L -c cookies.txt -b cookies.txt https://site.com/login

     # Inspect response headers (a HEAD request; also validates the TLS certificate)
     curl -I https://your-site.com
    

Essential curl parameters

  • -X POST/GET/PUT/DELETE — Specify HTTP method
  • -H "Header: value" — Add custom headers
  • -d "data" — Send data in request body
  • -L — Follow redirects automatically
  • -o file — Save response to file
  • -O — Save with original filename from server
  • -s — Silent mode (no progress)
  • -w "format" — Define custom output format
  • -c file — Save cookies to file
  • -b file — Use cookies from file
  • -I — Headers only (HEAD method)

7. jq — The definitive JSON processor

What does it do?

Manipulates and extracts JSON data elegantly. Indispensable in the REST API era.

Installation


     sudo apt install jq  # Ubuntu/Debian
     brew install jq      # macOS
    

Essential use cases


     # Extract specific field from API response
     curl -s https://api.github.com/users/octocat | jq '.name'

     # Filter arrays by conditions
     jq '.[] | select(.age > 18)' users.json

     # Transform data structure
     jq '.users[] | {name: .name, email: .email}' data.json

     # Count elements in an array
     jq '.items | length' response.json

     # Search in nested structures
     jq '.data.users[] | select(.status == "active") | .email' api_response.json
    

Real example: Monitor deployment


     # Check status of services in Kubernetes
     kubectl get pods -o json | jq '.items[] | select(.status.phase != "Running") | .metadata.name'
    
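jq also builds JSON safely from shell variables with `--arg`, which handles all the quoting for you (the variable names here are illustrative):

```shell
# Construct a JSON payload without hand-escaping quotes
user="alice"
jq -n --arg name "$user" --arg env "prod" '{user: $name, environment: $env}'
```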

8. ss — The connection monitor

What does it do?

Shows active network connections. Replaces the obsolete netstat with better performance.

Essential use cases


     # Check what process is using a specific port
     ss -tulpn | grep :3000

     # List all active TCP connections
     ss -t -a

     # Show connections by state
     ss -t state established

     # Find connections to a specific IP
     ss -t dst 192.168.1.100

     # Monitor listening ports
     ss -tlnp | grep LISTEN
    

Main ss options

  • -t — TCP connections only
  • -u — UDP connections only
  • -l — Listening ports only
  • -a — Show all connections (active + listening)
  • -n — Show port numbers instead of service names
  • -p — Show processes using each connection
  • state established — Filter by connection state
  • dst IP — Filter by destination IP
  • src IP — Filter by source IP
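These flags compose nicely into a small helper. A sketch (the function name is my own) that checks whether a TCP port is taken before you start a dev server:

```shell
# Exit 0 if something is listening on the given TCP port, 1 otherwise.
# ss -tln lists listening TCP sockets; column 4 is the local address:port.
port_in_use() {
  ss -tln | awk -v p=":$1" '$4 ~ p"$" {found=1} END {exit !found}'
}

if port_in_use 3000; then
  echo "Port 3000 is busy"
else
  echo "Port 3000 is free"
fi
```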

9. rsync — The intelligent synchronizer

What does it do?

Synchronizes files and directories efficiently. Only transfers changes, not complete files.

Essential use cases


     # Sync local project with remote server
     rsync -avz --exclude node_modules/ ./ user@server:/var/www/app/

     # Incremental backup with progress
     rsync -av --progress backup/ destination/

     # Mirror sync: delete files in destination missing from source (use with caution)
     rsync -av --delete source/ destination/

     # Copy only recently modified files
     rsync -av --update source/ destination/

     # Dry-run to see what will be synced
     rsync -avn source/ destination/
    

Important rsync flags

  • -a — Archive mode (preserves permissions, times, links)
  • -v — Verbose (shows processed files)
  • -z — Compress during transfer (useful for slow connections)
  • -n — Dry-run (simulates without making real changes)
  • --progress — Show progress bar
  • --delete — Delete files in destination that don’t exist in source
  • --update — Only update newer files
  • --exclude pattern — Exclude files/directories matching pattern
  • Note: Trailing slash (/) matters in paths (affects whether it copies directory or contents)
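The trailing-slash rule is easiest to internalize with a local experiment (directory names are illustrative):

```shell
mkdir -p source && touch source/file.txt

# With a trailing slash: copies the CONTENTS of source into the destination
rsync -a source/ dest-contents/   # result: dest-contents/file.txt

# Without it: copies the directory itself into the destination
rsync -a source dest-dir/         # result: dest-dir/source/file.txt
```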

10. tmux — The terminal multiplexer

What does it do?

Creates persistent terminal sessions. You can disconnect and reconnect without losing your work.

Installation and basic configuration


     sudo apt install tmux  # Ubuntu/Debian

     # Basic configuration in ~/.tmux.conf
     set -g prefix C-a
     bind-key C-a send-prefix
     unbind C-b
     set -g mouse on
    

Essential use cases


     # Create named session (short equivalent: tmux new -s development)
     tmux new-session -s development

     # List active sessions (alias: tmux ls)
     tmux list-sessions

     # Detach without closing anything: Prefix + d
     #  (Prefix = Ctrl+a because we redefined it; if you didn't change config it's Ctrl+b)

     # Reconnect to existing session
     tmux attach-session -t development

     # Close pane/window/session cleanly
     #   Pane: exit (exits shell) or Prefix + x (confirm)
     #   Window: Prefix + & (confirm)
     #   Session: tmux kill-session -t development

Basic step-by-step workflow (minimum you need)

Assuming you applied the prefix change to Ctrl+a in ~/.tmux.conf. If not, replace “Ctrl+a” with “Ctrl+b”.


     # 1. Create a new work session
     tmux new -s web

     # 2. Create a new window (e.g., for the server)
     #    Press: Ctrl+a  c

     # 3. Rename current window
     #    Press: Ctrl+a  ,   (type: server  and Enter)

     # 4. Switch to previous or next window
     #    Next: Ctrl+a  n
     #    Previous:  Ctrl+a  p

     # 5. Go directly to a window by number (windows start at 0)
     #    Example: Ctrl+a  0   (goes to first) / Ctrl+a  1

     # 6. Split window into panes
     #    Vertical (left/right): Ctrl+a  %
     #    Horizontal (top/bottom):    Ctrl+a  "

     # 7. Move between panes
     #    Ctrl+a  (arrows)  or  Ctrl+a  o (rotate)

     # 8. Resize (quick optional)
     #    Hold Ctrl+a then repeatedly press: Alt + ←/→/↑/↓  (depending on terminal)

     # 9. Detach from session (leave everything running)
     #    Ctrl+a  d

     # 10. Return later
     tmux attach -t web

     # 11. Close everything when done
     #    Exit each pane with: exit
     #    Or kill the entire session:
     tmux kill-session -t web

Mental mini-summary

  • New window: Prefix + c
  • Switch window: Prefix + n / p / number
  • Split vertical: Prefix + %
  • Split horizontal: Prefix + "
  • Move around: Prefix + arrows (or Prefix + o)
  • Detach: Prefix + d
  • Close pane: Prefix + x (confirm) or exit

With this, you can work with multiple processes (editor, server, logs) without opening dozens of separate terminals.
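Once the keys are muscle memory, you can script the whole workspace so a single command rebuilds it. A sketch, assuming the session layout from the workflow above (the names and commands like npm run dev are placeholders):

```shell
#!/bin/bash
# dev-session.sh - create (or reattach to) a ready-made tmux workspace
SESSION="web"

if ! tmux has-session -t "$SESSION" 2>/dev/null; then
  tmux new-session -d -s "$SESSION" -n editor          # window 0: editor
  tmux new-window  -t "$SESSION" -n server             # window 1: dev server
  tmux send-keys   -t "$SESSION:server" "npm run dev" C-m
  tmux new-window  -t "$SESSION" -n logs               # window 2: logs
  tmux send-keys   -t "$SESSION:logs" "tail -f app.log" C-m
fi

tmux attach -t "$SESSION"
```

Run it at the start of the day; if the session already exists, it just reattaches.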

Extra. man — Your best ally

Never memorize parameters. The system has integrated documentation waiting for you. The man (manual) command is your inseparable companion:


     # Complete command documentation
     man find

     # Quick search within the manual (once inside)
     /pattern  # Search forward
     ?pattern  # Search backward
     n         # Next match
     q         # Exit manual

     # Search commands by keyword in the manual descriptions
     man -k compress  # Find commands related to compression

     # Short version with main options
     find --help  # Works with most commands

     # Quick example with explanation
     tldr find  # If you have tldr installed (apt install tldr)
    

Practical examples to get started


     # Forgot how to use grep with regular expressions?
     man grep
     # Search within manual: /regex

     # Don't remember rsync options?
     rsync --help | grep -E "^\s*-[a-z]"  # Only main options

     # What exactly does the -z flag do in tar?
     man tar
     # Search within: /-z
    

Your learning workflow

  1. Experiment → Use the basic command
  2. Document → Consult the man page when you need more options
  3. Practice → Combine with other commands
  4. Automate → Create aliases and scripts

Powerful combinations

Pipeline for log analysis


     # Find unique errors in logs, sorted by frequency
     find /var/log -name "*.log" | \
     xargs grep -h "ERROR" | \
     awk '{for(i=3;i<=NF;i++) printf "%s ", $i; print ""}' | \
     sort | uniq -c | sort -nr | head -10
    

Real-time application monitoring


     # Combine tail, grep and awk for live monitoring
     tail -f app.log | grep --line-buffered "ERROR\|WARN" | \
     awk '{print strftime("%H:%M:%S"), $0}' | \
     while read line; do echo -e "\033[31m$line\033[0m"; done
    

Automatic project cleanup


     # Script to clean unnecessary files (null-delimited for safety)
     find . -name "node_modules" -type d -prune -print0 | xargs -0 rm -rf
     find . -name "*.log" -mtime +7 -print0 | xargs -0 rm
     find . -name ".DS_Store" -print0 | xargs -0 rm
    

Tips for mastery

1. Create aliases for frequent commands


     # Add to ~/.bashrc or ~/.zshrc
     alias logwatch='tail -f /var/log/app.log | grep --color=always ERROR'
     alias findjs='find . -name "*.js" -not -path "./node_modules/*"'
     alias gitclean='git branch --merged | grep -v "\*\|main\|develop" | xargs git branch -d'
    

2. Use history to improve


     # Search previous commands with pattern
     history | grep "find.*\.js"
    

3. Combine with scripts


     #!/bin/bash
     # deploy.sh - Simple deployment script
     echo "🚀 Starting deployment..."
     rsync -av --exclude node_modules/ ./ server:/app/
     ssh server "cd /app && npm install && pm2 restart all"
     echo "✅ Deployment completed"
    

Conclusion

These 10 commands aren’t just tools; they’re superpowers that will transform your experience as a Linux developer. The difference between knowing them superficially and mastering them lies in constant practice.

True Linux mastery doesn’t come from memorizing flags and options, but from understanding how to combine these tools to solve real problems. Each command is a piece of a larger puzzle: your optimized workflow.

Remember: the terminal isn’t intimidating when you have documentation just a man away.

Ready to become that developer who works magic in the terminal?

Happy Coding!
