
10 Python Scripts That Automate My Everyday Tasks

Practical Python automation scripts for file organization, image resizing, web scraping, PDF merging, and more. Complete code included for each script.

Priya Patel
18 min read

Why I Started Automating Everything with Python

About two years ago, I realized I was spending an embarrassing amount of time on repetitive computer tasks. Renaming files. Resizing images for blog posts. Copying data from CSVs into spreadsheets. Sending the same kind of emails over and over. Each task took maybe five or ten minutes, but when you add them all up across a week? Hours. Gone. Poof.

So I did what any self-respecting developer would do -- I wrote Python scripts to handle the boring stuff. Some of these scripts save me 30 seconds. Others save me an hour every single week. The point is, once you write them, they just work. You run them, grab your chai, and come back to a finished task.

Here are the 10 scripts I use most often, with full code you can copy, modify, and start using today. I have kept every script self-contained so you can just drop it into a .py file and run it. Some need third-party libraries, and I have noted the pip install commands wherever necessary.


1. File Organizer by Extension

My Downloads folder used to look like a war zone. PDFs mixed with screenshots mixed with random .zip files from three months ago. This script scans a folder and moves each file into a subfolder based on its extension.

The Code

import os
import shutil
from pathlib import Path

def organize_folder(target_dir):
    """Organize files into subfolders based on their extension."""
    target_path = Path(target_dir)

    if not target_path.exists():
        print(f"Directory {target_dir} does not exist.")
        return

    extension_map = {
        'Images': ['.jpg', '.jpeg', '.png', '.gif', '.bmp', '.svg', '.webp'],
        'Documents': ['.pdf', '.doc', '.docx', '.txt', '.xlsx', '.xls', '.pptx', '.csv'],
        'Videos': ['.mp4', '.mkv', '.avi', '.mov', '.wmv', '.flv'],
        'Audio': ['.mp3', '.wav', '.flac', '.aac', '.ogg'],
        'Archives': ['.zip', '.rar', '.7z', '.tar', '.gz'],
        'Code': ['.py', '.js', '.ts', '.html', '.css', '.java', '.cpp', '.c'],
        'Installers': ['.exe', '.msi', '.dmg', '.deb', '.rpm'],
    }

    # Reverse map: extension -> folder name
    ext_to_folder = {}
    for folder, extensions in extension_map.items():
        for ext in extensions:
            ext_to_folder[ext] = folder

    moved_count = 0
    for item in target_path.iterdir():
        if item.is_file():
            ext = item.suffix.lower()
            folder_name = ext_to_folder.get(ext, 'Others')
            dest_folder = target_path / folder_name
            dest_folder.mkdir(exist_ok=True)

            dest_file = dest_folder / item.name
            if dest_file.exists():
                # Add a number suffix to avoid overwriting
                base = item.stem
                counter = 1
                while dest_file.exists():
                    dest_file = dest_folder / f"{base}_{counter}{ext}"
                    counter += 1

            shutil.move(str(item), str(dest_file))
            moved_count += 1
            print(f"Moved: {item.name} -> {folder_name}/")

    print(f"\nDone! Moved {moved_count} files.")

if __name__ == "__main__":
    folder = input("Enter the folder path to organize: ").strip()
    organize_folder(folder)

How I Use It

I have this set up as a scheduled task on my Windows machine that runs every Sunday evening. By Monday morning, my Downloads folder is pristine. You could also hook it up with cron on Linux or launchd on macOS. The extension map is easy to customize -- just add new categories or rearrange file types to match your workflow.
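Before pointing the organizer at a real folder, the collision rule is easy to sanity-check in isolation. Here is a sketch where unique_name is a hypothetical helper (not part of the script above) that mirrors the renaming loop:

```python
from pathlib import PurePath

def unique_name(existing, name):
    """Mirror the organizer's collision rule: append _1, _2, ... until the name is free."""
    p = PurePath(name)
    candidate, counter = name, 1
    while candidate in existing:
        candidate = f"{p.stem}_{counter}{p.suffix}"
        counter += 1
    return candidate

print(unique_name({"report.pdf", "report_1.pdf"}, "report.pdf"))  # report_2.pdf
```

Working against a plain set of names instead of the filesystem makes the rule trivially testable; the real script does the same walk with dest_file.exists().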


2. Bulk Image Resizer

When I am writing blog posts or preparing presentations, I often need to resize a batch of images to a specific width. Doing this one by one in an image editor is painful. This script handles an entire folder in seconds.

The Code

from PIL import Image
from pathlib import Path

def resize_images(input_dir, output_dir, max_width=800):
    """Resize all images in a folder to a maximum width, preserving aspect ratio."""
    input_path = Path(input_dir)
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    supported = {'.jpg', '.jpeg', '.png', '.bmp', '.webp'}

    for img_file in input_path.iterdir():
        if img_file.suffix.lower() in supported:
            with Image.open(img_file) as img:
                if img.width > max_width:
                    ratio = max_width / img.width
                    new_height = int(img.height * ratio)
                    img_resized = img.resize((max_width, new_height), Image.LANCZOS)
                else:
                    img_resized = img.copy()

                output_file = output_path / img_file.name
                img_resized.save(output_file, quality=85, optimize=True)
                print(f"Resized: {img_file.name} ({img.width}x{img.height} -> {img_resized.width}x{img_resized.height})")

if __name__ == "__main__":
    src = input("Source folder: ").strip()
    dest = input("Output folder: ").strip()
    width = int(input("Max width (default 800): ").strip() or "800")
    resize_images(src, dest, width)

You will need Pillow for this one:

pip install Pillow

The quality=85 setting with optimize=True gives you a good balance between file size and visual quality. I have found that going below 80 starts to show noticeable compression artifacts, especially on photographs.
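The aspect-ratio math is the part worth double-checking. Isolated as a pure function (fit_width is a hypothetical name mirroring the logic above, including the int() truncation):

```python
def fit_width(width, height, max_width=800):
    """Return dimensions capped at max_width, preserving the aspect ratio."""
    if width <= max_width:
        return width, height  # already small enough, leave untouched
    ratio = max_width / width
    return max_width, int(height * ratio)

print(fit_width(1600, 1200))  # (800, 600)
```

Pillow's Image.thumbnail() can do a similar in-place fit, but the explicit version makes the rounding behavior obvious.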


3. Automated Email Sender

I send a weekly status update email to my team every Friday. The format is almost identical each time -- just the bullet points change. This script uses Python's built-in smtplib to send HTML emails through Gmail.

The Code

import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from datetime import datetime

def send_email(sender, password, recipient, subject, body_html):
    """Send an HTML email via Gmail SMTP."""
    msg = MIMEMultipart('alternative')
    msg['From'] = sender
    msg['To'] = recipient
    msg['Subject'] = subject

    html_part = MIMEText(body_html, 'html')
    msg.attach(html_part)

    with smtplib.SMTP_SSL('smtp.gmail.com', 465) as server:
        server.login(sender, password)
        server.send_message(msg)
        print(f"Email sent to {recipient}")

if __name__ == "__main__":
    SENDER = "[email protected]"
    APP_PASSWORD = "your-16-char-app-password"  # Use Gmail App Password, NOT your regular password

    today = datetime.now().strftime("%B %d, %Y")

    updates = [
        "Completed the API refactoring for the payment module",
        "Fixed 3 critical bugs reported by QA team",
        "Started working on the dashboard redesign",
    ]

    bullet_points = "".join(f"<li>{item}</li>" for item in updates)

    html_body = f"""
    <html>
    <body style="font-family: Arial, sans-serif; line-height: 1.6;">
        <h2>Weekly Status Update - {today}</h2>
        <p>Hi Team,</p>
        <p>Here is my update for this week:</p>
        <ul>{bullet_points}</ul>
        <p>Best regards,<br>Priya</p>
    </body>
    </html>
    """

    send_email(SENDER, APP_PASSWORD, "[email protected]", f"Weekly Update - {today}", html_body)

Important: Never use your actual Gmail password here. Go to your Google Account settings, enable 2-Step Verification, and generate an App Password. That 16-character code is what goes into the script. Store it in an environment variable in production, obviously -- hardcoding credentials is a terrible idea for anything beyond a personal script.
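To keep the App Password out of the file entirely, a small loader that reads environment variables works fine. A sketch -- the GMAIL_SENDER and GMAIL_APP_PASSWORD variable names are my own convention, not anything Gmail requires:

```python
import os

def load_smtp_credentials():
    """Read the sender address and App Password from environment variables."""
    sender = os.environ.get("GMAIL_SENDER")
    app_password = os.environ.get("GMAIL_APP_PASSWORD")
    if not sender or not app_password:
        raise RuntimeError("Set GMAIL_SENDER and GMAIL_APP_PASSWORD before running")
    return sender, app_password
```

Then the main block becomes SENDER, APP_PASSWORD = load_smtp_credentials(), and the script can live in version control without leaking anything.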


4. Web Scraper for Price Tracking

I have been tracking the price of a few gadgets on Amazon and Flipkart for months using this script. It fetches the product page, extracts the price, and logs it to a CSV file. When the price drops below my target, it sends me an alert.

The Code

import requests
from bs4 import BeautifulSoup
import csv
from datetime import datetime
from pathlib import Path

def get_price(url, headers):
    """Scrape product price from a webpage."""
    response = requests.get(url, headers=headers, timeout=30)
    soup = BeautifulSoup(response.content, 'html.parser')

    # Amazon India price selector
    price_element = soup.select_one('span.a-price-whole')
    if price_element:
        price_text = price_element.get_text().replace(',', '').replace('.', '').strip()
        return int(price_text)

    # Flipkart price selector (fallback)
    price_element = soup.select_one('div._30jeq3')
    if price_element:
        price_text = price_element.get_text().replace('₹', '').replace(',', '').strip()
        return int(price_text)

    return None

def track_price(url, product_name, target_price):
    """Track price and log to CSV."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                       '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
        'Accept-Language': 'en-IN,en;q=0.9',
    }

    price = get_price(url, headers)
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M")

    if price:
        log_file = Path("price_log.csv")
        file_exists = log_file.exists()

        with open(log_file, 'a', newline='') as f:
            writer = csv.writer(f)
            if not file_exists:
                writer.writerow(['Timestamp', 'Product', 'Price (INR)', 'Target', 'Below Target'])
            writer.writerow([timestamp, product_name, price, target_price, price <= target_price])

        print(f"[{timestamp}] {product_name}: ₹{price:,}")

        if price <= target_price:
            print(f"  ALERT: Price is at or below your target of ₹{target_price:,}!")
            return True
    else:
        print(f"Could not fetch price for {product_name}")

    return False

if __name__ == "__main__":
    products = [
        {
            'url': 'https://www.amazon.in/dp/PRODUCT_ID',
            'name': 'Sony WH-1000XM5',
            'target': 22000
        },
    ]

    for product in products:
        track_price(product['url'], product['name'], product['target'])

You will need:

pip install requests beautifulsoup4

A word of caution: web scraping can break when websites change their HTML structure. I check this script every couple of weeks and update the CSS selectors if needed. Also, be respectful -- do not hit the same page hundreds of times per hour. I run this once a day through a cron job and that has been perfectly fine.
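Scraped price strings come in several shapes (rupee signs, Indian-style comma grouping, trailing paise), so a single normalizer is handy. A sketch -- parse_price is a hypothetical helper, not part of the script above, and it assumes dot-as-decimal formatting:

```python
import re

def parse_price(text):
    """Normalize a scraped price string like '₹24,999.00' to an int number of rupees."""
    cleaned = re.sub(r"[^\d.]", "", text)  # keep only digits and the decimal point
    if not cleaned:
        return None
    return int(float(cleaned))

print(parse_price("₹24,999.00"))  # 24999
```

Funneling both site-specific selectors through one function like this also means there is a single place to fix when a site changes its number formatting.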


5. PDF Merger

Combining multiple PDF files into one is something I do surprisingly often -- invoices, scanned documents, project reports. This script merges any number of PDFs in the order you specify.

The Code

from PyPDF2 import PdfMerger
from pathlib import Path

def merge_pdfs(pdf_list, output_filename="merged_output.pdf"):
    """Merge multiple PDF files into one."""
    merger = PdfMerger()

    for pdf_path in pdf_list:
        path = Path(pdf_path)
        if path.exists() and path.suffix.lower() == '.pdf':
            merger.append(str(path))
            print(f"Added: {path.name}")
        else:
            print(f"Skipped (not found or not PDF): {pdf_path}")

    merger.write(output_filename)
    merger.close()
    print(f"\nMerged PDF saved as: {output_filename}")

def merge_folder(folder_path, output_filename="merged_output.pdf"):
    """Merge all PDFs in a folder, sorted by filename."""
    folder = Path(folder_path)
    pdf_files = sorted(folder.glob("*.pdf"))

    if not pdf_files:
        print("No PDF files found in the folder.")
        return

    print(f"Found {len(pdf_files)} PDF files:")
    for f in pdf_files:
        print(f"  - {f.name}")

    merge_pdfs([str(f) for f in pdf_files], output_filename)

if __name__ == "__main__":
    choice = input("Merge (1) specific files or (2) entire folder? ").strip()

    if choice == '1':
        files = input("Enter PDF file paths (comma-separated): ").strip().split(',')
        files = [f.strip() for f in files]
        output = input("Output filename (default: merged_output.pdf): ").strip() or "merged_output.pdf"
        merge_pdfs(files, output)
    else:
        folder = input("Enter folder path: ").strip()
        output = input("Output filename (default: merged_output.pdf): ").strip() or "merged_output.pdf"
        merge_folder(folder, output)

Install the dependency:

pip install PyPDF2

(Note: PyPDF2 is no longer actively maintained; its successor is pypdf. The code above works with PyPDF2 3.x, but if you are starting fresh, pip install pypdf and use PdfWriter with writer.append(...) -- the merging logic is otherwise the same.)

I use the merge_folder option most often. It picks up all PDFs in alphabetical order, which works great if you name your files with numeric prefixes like 01_intro.pdf, 02_chapter1.pdf, and so on.
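One gotcha with that alphabetical sort: without zero-padding, plain sorted() puts 10_appendix.pdf before 2_intro.pdf. A natural-sort key fixes it -- a sketch, where natural_key is my own helper rather than part of the script above:

```python
import re

def natural_key(name):
    """Split a filename into text and number chunks so numeric prefixes sort numerically."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

files = ["10_appendix.pdf", "2_intro.pdf", "1_cover.pdf"]
assert sorted(files, key=natural_key) == ["1_cover.pdf", "2_intro.pdf", "10_appendix.pdf"]
```

Passing key=natural_key to the sorted() call in merge_folder gives the intuitive ordering without renaming files.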


6. CSV to Excel Converter with Formatting

Raw CSV files are ugly. When I need to share data with non-technical colleagues, I convert them to properly formatted Excel files with headers styled, column widths adjusted, and even some conditional formatting applied.

The Code

import pandas as pd
from openpyxl import load_workbook
from openpyxl.styles import Font, PatternFill, Alignment, Border, Side
from pathlib import Path

def csv_to_excel(csv_path, output_path=None):
    """Convert CSV to a formatted Excel file."""
    csv_file = Path(csv_path)
    if output_path is None:
        output_path = csv_file.with_suffix('.xlsx')

    # Read CSV
    df = pd.read_csv(csv_file)

    # Write to Excel
    df.to_excel(output_path, index=False, sheet_name='Data')

    # Now apply formatting
    wb = load_workbook(output_path)
    ws = wb['Data']

    # Header styling
    header_font = Font(bold=True, color="FFFFFF", size=11)
    header_fill = PatternFill(start_color="2563EB", end_color="2563EB", fill_type="solid")
    header_alignment = Alignment(horizontal="center", vertical="center")
    thin_border = Border(
        left=Side(style='thin'),
        right=Side(style='thin'),
        top=Side(style='thin'),
        bottom=Side(style='thin')
    )

    for cell in ws[1]:
        cell.font = header_font
        cell.fill = header_fill
        cell.alignment = header_alignment
        cell.border = thin_border

    # Auto-adjust column widths
    for column in ws.columns:
        max_length = 0
        column_letter = column[0].column_letter
        for cell in column:
            if cell.value:
                max_length = max(max_length, len(str(cell.value)))
            cell.border = thin_border
        adjusted_width = min(max_length + 4, 50)
        ws.column_dimensions[column_letter].width = adjusted_width

    # Alternate row coloring
    light_fill = PatternFill(start_color="F0F7FF", end_color="F0F7FF", fill_type="solid")
    for row_idx, row in enumerate(ws.iter_rows(min_row=2, max_row=ws.max_row), start=2):
        if row_idx % 2 == 0:
            for cell in row:
                cell.fill = light_fill

    wb.save(output_path)
    print(f"Formatted Excel file saved: {output_path}")

if __name__ == "__main__":
    csv_file = input("Enter CSV file path: ").strip()
    csv_to_excel(csv_file)

Dependencies:

pip install pandas openpyxl

The alternating row colors and auto-adjusted column widths make the output look surprisingly professional. My manager once asked me which tool I used to generate "those nice Excel reports" and I just smiled.
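The column-width rule (longest cell plus padding, capped at 50) is easy to verify on its own. A sketch; fit_column_width is a hypothetical name for illustration:

```python
def fit_column_width(values, padding=4, cap=50):
    """Width rule used above: length of the longest cell plus padding, capped at 50."""
    longest = max((len(str(v)) for v in values if v is not None), default=0)
    return min(longest + padding, cap)

print(fit_column_width(["Name", "Priya Patel"]))  # 15
```

The cap matters in practice: one long free-text cell would otherwise blow a column out to hundreds of characters wide.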


7. System Backup Script

This script creates compressed backups of important folders and keeps only the last N backups to avoid eating up disk space. Simple, reliable, and something everyone should have running.

The Code

import shutil
from pathlib import Path
from datetime import datetime

def create_backup(source_dirs, backup_root, max_backups=5):
    """Create a timestamped compressed backup of specified directories."""
    backup_root = Path(backup_root)
    backup_root.mkdir(parents=True, exist_ok=True)

    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")

    for source in source_dirs:
        source_path = Path(source)
        if not source_path.exists():
            print(f"Source not found, skipping: {source}")
            continue

        folder_name = source_path.name
        archive_name = f"{folder_name}_{timestamp}"
        archive_path = backup_root / archive_name

        print(f"Backing up: {source_path} ...")
        shutil.make_archive(
            str(archive_path),
            'zip',
            root_dir=str(source_path.parent),
            base_dir=source_path.name
        )

        archive_size = Path(f"{archive_path}.zip").stat().st_size / (1024 * 1024)
        print(f"  Created: {archive_name}.zip ({archive_size:.1f} MB)")

    # Cleanup old backups
    cleanup_old_backups(backup_root, max_backups)

def cleanup_old_backups(backup_root, max_backups):
    """Remove old backups, keeping only the most recent ones per source folder."""
    all_zips = sorted(backup_root.glob("*.zip"), key=lambda f: f.stat().st_mtime, reverse=True)

    # Group by source folder name (everything before the timestamp)
    from collections import defaultdict
    groups = defaultdict(list)
    for zip_file in all_zips:
        # Extract the folder name part (before _YYYYMMDD_HHMMSS)
        parts = zip_file.stem.rsplit('_', 2)
        if len(parts) >= 3:
            group_name = parts[0]
        else:
            group_name = zip_file.stem
        groups[group_name].append(zip_file)

    for group_name, files in groups.items():
        if len(files) > max_backups:
            for old_file in files[max_backups:]:
                old_file.unlink()
                print(f"  Removed old backup: {old_file.name}")

if __name__ == "__main__":
    folders_to_backup = [
        r"C:\Users\priya\Documents\Projects",
        r"C:\Users\priya\Documents\Notes",
        r"C:\Users\priya\Desktop\WorkFiles",
    ]

    backup_destination = r"D:\Backups"

    create_backup(folders_to_backup, backup_destination, max_backups=5)
    print("\nBackup complete!")

No external dependencies needed -- this uses only the Python standard library. I run this every night at 11 PM using Windows Task Scheduler. The max_backups=5 parameter means it keeps the five most recent backups for each folder and automatically deletes older ones. Set it and forget it.
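The grouping rule in cleanup_old_backups is the subtle part -- it has to survive folder names that themselves contain underscores. Isolated as a pure function (backup_group is a hypothetical name for illustration; the rsplit logic matches the script above):

```python
def backup_group(stem):
    """Strip the trailing _YYYYMMDD_HHMMSS timestamp to recover the source folder name."""
    parts = stem.rsplit("_", 2)  # split off at most the last two underscore chunks
    return parts[0] if len(parts) >= 3 else stem

print(backup_group("Projects_20250115_230000"))  # Projects
```

Because rsplit works from the right, a name like Work_Files_20250115_230000 still groups under Work_Files rather than Work.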


8. WhatsApp Message Scheduler (Concept)

Now, I should be honest here -- automating WhatsApp is a gray area. WhatsApp does not officially support bots for personal accounts, and using unofficial APIs can get your number banned. But this concept script using pywhatkit is handy for sending one-off scheduled messages. Use it responsibly.

The Code

import pywhatkit as kit
from datetime import datetime, timedelta

def schedule_whatsapp_message(phone_number, message, send_time=None):
    """
    Schedule a WhatsApp message using pywhatkit.
    phone_number should include country code (e.g., +91XXXXXXXXXX)
    send_time is a datetime object; if None, sends 2 minutes from now.
    """
    if send_time is None:
        send_time = datetime.now() + timedelta(minutes=2)

    hour = send_time.hour
    minute = send_time.minute

    print(f"Scheduling message to {phone_number} at {hour:02d}:{minute:02d}")
    print(f"Message: {message}")

    kit.sendwhatmsg(
        phone_no=phone_number,
        message=message,
        time_hour=hour,
        time_min=minute,
        wait_time=15,
        tab_close=True
    )
    print("Message sent successfully!")

if __name__ == "__main__":
    # Example: send birthday wishes at midnight.
    # Note: pywhatkit schedules by hour and minute on the current day only,
    # so run the script on the date you want the message to go out.
    schedule_whatsapp_message(
        phone_number="+919876543210",
        message="Happy Birthday! 🎂 Wishing you an amazing year ahead!",
        send_time=datetime(2026, 2, 15, 0, 0)  # only the 00:00 time component is used
    )

You will need:

pip install pywhatkit

This works by opening WhatsApp Web in your browser and typing the message automatically. It requires you to be logged into WhatsApp Web and your computer to be on at the scheduled time. It is not elegant, but it gets the job done for things like birthday wishes or reminder messages. For anything more serious, look into the WhatsApp Business API, which is the official and proper way to send automated messages.


9. Password Generator

I know password managers can generate passwords, but sometimes I need to generate a batch of strong passwords quickly -- for test accounts, temporary access credentials, or just to have a few stored offline. This script generates passwords with customizable rules.

The Code

import secrets
import string

def generate_password(
    length: int = 20,
    uppercase: bool = True,
    lowercase: bool = True,
    digits: bool = True,
    special: bool = True,
    exclude_ambiguous: bool = True,
    min_uppercase: int = 1,
    min_digits: int = 1,
    min_special: int = 1
) -> str:
    """Generate a cryptographically secure password."""

    chars = ""
    if lowercase:
        chars += string.ascii_lowercase
    if uppercase:
        chars += string.ascii_uppercase
    if digits:
        chars += string.digits
    if special:
        chars += "!@#$%^&*()-_=+[]{}|;:,.<>?"

    if exclude_ambiguous:
        ambiguous = "0OoIl1"
        chars = ''.join(c for c in chars if c not in ambiguous)

    if not chars:
        raise ValueError("At least one character type must be enabled")

    # Keep generating until all minimums are met
    while True:
        password = ''.join(secrets.choice(chars) for _ in range(length))

        # Only enforce minimums for character classes that are actually enabled,
        # otherwise a disabled class would make this loop spin forever
        ok_upper = not uppercase or sum(1 for c in password if c in string.ascii_uppercase) >= min_uppercase
        ok_digit = not digits or sum(1 for c in password if c in string.digits) >= min_digits
        ok_special = not special or sum(1 for c in password if c in "!@#$%^&*()-_=+[]{}|;:,.<>?") >= min_special

        if ok_upper and ok_digit and ok_special:
            return password

def generate_batch(count: int = 10, **kwargs) -> list:
    """Generate multiple passwords."""
    return [generate_password(**kwargs) for _ in range(count)]

if __name__ == "__main__":
    print("=== Password Generator ===\n")

    count = int(input("How many passwords? (default 5): ").strip() or "5")
    length = int(input("Password length? (default 20): ").strip() or "20")

    passwords = generate_batch(count=count, length=length)

    print(f"\nGenerated {count} passwords ({length} chars each):\n")
    for i, pwd in enumerate(passwords, 1):
        print(f"  {i}. {pwd}")

    # Calculate entropy
    import math
    charset_size = 82  # 24 lowercase + 24 uppercase + 8 digits + 26 special after removing ambiguous chars
    entropy = length * math.log2(charset_size)
    print(f"\n  Estimated entropy: {entropy:.0f} bits")
    print(f"  (128+ bits is considered very strong)")

This uses Python's secrets module instead of random, which is critical. The random module is not cryptographically secure -- it uses a predictable pseudorandom number generator. The secrets module uses your operating system's cryptographic random source, making the passwords truly unpredictable. The exclude_ambiguous flag removes characters like 0/O and 1/l/I that look similar in many fonts, which is a lifesaver when you need to read passwords aloud or type them manually.
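The entropy estimate at the end is just length times log2 of the alphabet size. As a standalone helper (entropy_bits is my own name for it):

```python
import math

def entropy_bits(length, charset_size):
    """Password entropy in bits: each position contributes log2(alphabet) bits."""
    return length * math.log2(charset_size)

print(round(entropy_bits(20, 82)))  # 127
```

With the default 20-character length and the roughly 82-character alphabet left after removing ambiguous characters, you land just under the 128-bit mark; bump the length to 21 if you want to clear it.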


10. Auto-Downloader for YouTube Playlists

I often download educational playlists for offline viewing during travel. This script uses yt-dlp (the maintained fork of youtube-dl) to download entire playlists with sensible defaults.

The Code

import subprocess
import sys
from pathlib import Path

def download_playlist(url, output_dir="downloads", quality="best", audio_only=False):
    """Download a YouTube playlist using yt-dlp."""
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    # Output template: PlaylistName/01 - VideoTitle.ext
    output_template = str(output_path / "%(playlist_title)s" / "%(playlist_index)02d - %(title)s.%(ext)s")

    cmd = [
        sys.executable, "-m", "yt_dlp",
        "--output", output_template,
        "--no-overwrites",
        "--add-metadata",
        "--embed-thumbnail",
        "--progress",
    ]

    if audio_only:
        cmd.extend([
            "--extract-audio",
            "--audio-format", "mp3",
            "--audio-quality", "0",
        ])
    else:
        if quality == "best":
            cmd.extend(["-f", "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best"])
        elif quality == "720p":
            cmd.extend(["-f", "bestvideo[height<=720][ext=mp4]+bestaudio[ext=m4a]/best[height<=720]"])
        elif quality == "480p":
            cmd.extend(["-f", "bestvideo[height<=480][ext=mp4]+bestaudio[ext=m4a]/best[height<=480]"])

    cmd.append(url)

    print(f"Downloading {'audio' if audio_only else 'video'} from: {url}")
    print(f"Output directory: {output_path.absolute()}")
    print(f"Quality: {'MP3 audio' if audio_only else quality}\n")

    try:
        subprocess.run(cmd, check=True)
        print("\nDownload complete!")
    except subprocess.CalledProcessError as e:
        print(f"\nDownload failed with error code: {e.returncode}")
    except FileNotFoundError:
        print("yt-dlp not found. Install it with: pip install yt-dlp")

if __name__ == "__main__":
    url = input("Enter YouTube playlist or video URL: ").strip()

    print("\nQuality options:")
    print("  1. Best available")
    print("  2. 720p")
    print("  3. 480p")
    print("  4. Audio only (MP3)")

    choice = input("\nChoice (1-4, default 1): ").strip() or "1"

    quality_map = {"1": "best", "2": "720p", "3": "480p"}
    audio_only = choice == "4"
    quality = quality_map.get(choice, "best")

    output = input("Output directory (default: downloads): ").strip() or "downloads"

    download_playlist(url, output, quality, audio_only)

You will need:

pip install yt-dlp

The output template creates a neat folder structure: each playlist gets its own folder, and videos are numbered with their playlist index. The --no-overwrites flag means you can run the script again if it gets interrupted, and it will skip files that already exist. The --embed-thumbnail flag adds the video thumbnail to the file metadata, which makes the files look nice in file managers and media players.
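Since the script shells out to yt-dlp, pulling the argument assembly into its own function makes it testable without touching the network. A sketch with a hypothetical build_ytdlp_cmd; the flags mirror the ones used above:

```python
import sys

def build_ytdlp_cmd(url, fmt="bestvideo[ext=mp4]+bestaudio[ext=m4a]/best", audio_only=False):
    """Assemble the yt-dlp argument list the same way the script above does."""
    cmd = [sys.executable, "-m", "yt_dlp", "--no-overwrites", "--add-metadata"]
    if audio_only:
        cmd += ["--extract-audio", "--audio-format", "mp3", "--audio-quality", "0"]
    else:
        cmd += ["-f", fmt]
    return cmd + [url]
```

Keeping the URL as the final argument and the flags in one place means a unit test can assert on the command without ever running a download.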

Legal note: Only download content that you have the right to download. Educational content creators on YouTube sometimes offer offline viewing through YouTube Premium as well, which is the officially supported way. Use this responsibly.


Bonus: How I Organize All These Scripts

I keep all my automation scripts in a single folder called ~/scripts/ with a simple naming convention:

Script               Filename                 Schedule
File Organizer       organize_downloads.py    Weekly (Sunday 8 PM)
Image Resizer        resize_images.py         On demand
Email Sender         weekly_update.py         Weekly (Friday 5 PM)
Price Tracker        track_prices.py          Daily (9 AM)
PDF Merger           merge_pdfs.py            On demand
CSV to Excel         csv_to_xlsx.py           On demand
System Backup        backup.py                Daily (11 PM)
WhatsApp Scheduler   wa_schedule.py           On demand
Password Generator   gen_passwords.py         On demand
Auto-Downloader      download_playlist.py     On demand

For scripts that run on a schedule, I use Windows Task Scheduler (since my daily driver is Windows) or cron on my Linux server. The key is to route all output to a log file so you can check if something went wrong:

# Example cron entry (Linux)
0 9 * * * /usr/bin/python3 /home/priya/scripts/track_prices.py >> /home/priya/scripts/logs/price_tracker.log 2>&1

A Few Tips for Writing Automation Scripts

Keep them simple. The best automation scripts are the ones you can read six months later and immediately understand. Resist the urge to over-engineer.

Add error handling. A script that crashes silently is worse than no script at all. At minimum, wrap your main logic in a try-except block and log the error somewhere.

Use pathlib instead of os.path. The pathlib module is cleaner, more readable, and more Pythonic. There is almost no reason to use os.path.join() anymore when you can write Path(folder) / "subfolder" / "file.txt".

Store secrets properly. Use environment variables or a .env file (with python-dotenv) for API keys, passwords, and other sensitive data. Never hardcode them into your scripts, even if "it is just for personal use." Habits matter.

Version control your scripts. I keep my scripts folder in a private Git repository. It has saved me more than once when I accidentally broke something during an edit.
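The error-handling tip can be as small as a wrapper that logs the traceback and carries on. A sketch -- the script.log filename is my own choice, not a convention:

```python
import logging
import traceback

# Route everything to a log file so scheduled runs leave a paper trail
logging.basicConfig(filename="script.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def run_safely(task, *args, **kwargs):
    """Run a task; on failure, log the full traceback and return None instead of crashing."""
    try:
        return task(*args, **kwargs)
    except Exception:
        logging.error("Task %s failed:\n%s", task.__name__, traceback.format_exc())
        return None

run_safely(int, "not a number")  # logs the ValueError, returns None
```

Wrapping the main entry point of each scheduled script this way means a bad run shows up in the log instead of silently dying in Task Scheduler.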


Wrapping Up

Automation is one of those skills where the investment pays off exponentially. The file organizer script took me maybe 30 minutes to write, and it has saved me countless hours over the past two years. The price tracker has helped me snag deals I would have completely missed otherwise.

You do not need to be an advanced Python developer to write scripts like these. Most of them use straightforward logic -- loops, conditionals, file operations -- combined with a few well-chosen libraries. The Python ecosystem is ridiculously rich, and there is a library for almost anything you want to automate.

My advice? Start with whatever task annoys you the most. The one where you think, "Ugh, I have to do this again?" That is your first automation candidate. Write the script, run it a few times to make sure it works, and then schedule it or keep it handy. Once you experience the satisfaction of watching a computer do your tedious work for you, you will start seeing automation opportunities everywhere.

If you have questions about any of these scripts or want me to cover more automation ideas, drop a comment below. I am especially interested in hearing what repetitive tasks you have automated -- there might be a great script idea I have not thought of yet.
