# landonmiles.com - Full Content Export > This file contains the complete content of all pages and blog posts on landonmiles.com. > Intended for AI/LLM indexing and training purposes. > Generated: 2026-02-12 ## Site Information - Author: Landon Miles - Role: Technical Marketer & Problem Solver - Website: https://landonmiles.com - Email: hello@landonmiles.com - LinkedIn: https://www.linkedin.com/in/landonmiles ## About Landon Miles is a technical marketer who translates complex products into clarity. He creates content about technical marketing, AWS, AI tools, and command line workflows. Currently working as Senior Technical Marketing Specialist at Automox. --- ## Content by Category ### Development - [PDFs Are Feedback Traps](https://landonmiles.com/blog/pdfcomments-app) - [Terminal Velocity: Accelerate Your Workflow with the Command Line](https://landonmiles.com/blog/terminal-velocity) - [Building Static Sites with AWS S3 and CloudFront](https://landonmiles.com/blog/static-sites-and-aws) ### Technical Marketing - [Avoiding Hype: How to Write Honestly About Complex Products](https://landonmiles.com/blog/avoiding-hype) - [The Key to Technical Marketing: Find Your Lego Level](https://landonmiles.com/blog/find-your-lego-level) --- ## Content by Tag ### aws - [Building Static Sites with AWS S3 and CloudFront](https://landonmiles.com/blog/static-sites-and-aws) ### cli - [Terminal Velocity: Accelerate Your Workflow with the Command Line](https://landonmiles.com/blog/terminal-velocity) ### communication - [Avoiding Hype: How to Write Honestly About Complex Products](https://landonmiles.com/blog/avoiding-hype) ### content-strategy - [The Key to Technical Marketing: Find Your Lego Level](https://landonmiles.com/blog/find-your-lego-level) ### deployment - [Building Static Sites with AWS S3 and CloudFront](https://landonmiles.com/blog/static-sites-and-aws) ### pdf - [PDFs Are Feedback Traps](https://landonmiles.com/blog/pdfcomments-app) ### product-marketing - [Avoiding Hype: 
How to Write Honestly About Complex Products](https://landonmiles.com/blog/avoiding-hype) - [The Key to Technical Marketing: Find Your Lego Level](https://landonmiles.com/blog/find-your-lego-level) ### productivity - [Terminal Velocity: Accelerate Your Workflow with the Command Line](https://landonmiles.com/blog/terminal-velocity) ### side-project - [PDFs Are Feedback Traps](https://landonmiles.com/blog/pdfcomments-app) ### static-sites - [Building Static Sites with AWS S3 and CloudFront](https://landonmiles.com/blog/static-sites-and-aws) ### technical-marketing - [Avoiding Hype: How to Write Honestly About Complex Products](https://landonmiles.com/blog/avoiding-hype) - [The Key to Technical Marketing: Find Your Lego Level](https://landonmiles.com/blog/find-your-lego-level) ### terminal - [Terminal Velocity: Accelerate Your Workflow with the Command Line](https://landonmiles.com/blog/terminal-velocity) ### tools - [PDFs Are Feedback Traps](https://landonmiles.com/blog/pdfcomments-app) ### workflow - [PDFs Are Feedback Traps](https://landonmiles.com/blog/pdfcomments-app) - [Terminal Velocity: Accelerate Your Workflow with the Command Line](https://landonmiles.com/blog/terminal-velocity) --- ## All Posts ## PDFs Are Feedback Traps - URL: https://landonmiles.com/blog/pdfcomments-app - Date: 2026-01-25 - Category: Development - Tags: tools, pdf, workflow, side-project ## Extraction Tax You know the drill. You generate a PDF: an architectural diagram, a draft blog post, a technical spec. You send it to stakeholders for review. They do exactly what you asked: they mark it up with comments. The creation part is solved. The commenting part is solved. But the **extraction** part is a tax on your sanity. If you have 40+ bubbles of feedback, your afternoon looks like this: 1. Open the PDF on monitor one. 2. Open your ticket tracker or Markdown file on monitor two. 3. **Click comment. Copy text. Alt-Tab. Paste. Alt-Tab. Repeat.** It's manual, error-prone, and a waste of time. 
You miss comments. You miscopy context. The feedback remains trapped in a proprietary layer on top of your document, completely divorced from your actual workflow. I didn't want a "better PDF viewer." I wanted a parser that would strip-mine a document for tasks. --- ## Why Existing Tools Fail Before building, I looked for existing tools. The options were terrible. * **Adobe Acrobat:** It can export comments to FDF or XFDF, proprietary formats that no one actually wants to read. Exporting to Word or RTF results in a formatting nightmare. * **Online Converters:** Most "PDF to Text" tools ignore annotations entirely. The ones that don't usually require you to upload sensitive documents to a mysterious server. * **Python Scripts:** You can use `PyPDF2`, but it requires a dev environment and custom logic for every document structure. The gap was clear: drag, drop, copy, done – a utility that runs entirely in the browser. --- ## Local-First by Design **Stack:** Next.js 14 (static export) :: TypeScript :: Tailwind :: pdfjs-dist. ### Privacy is Portability PDFs often contain sensitive data: contracts, internal memos, unreleased specs. By building this as a client-side app using `pdfjs-dist`, the file never leaves your browser. There is no server-side processing. You could disconnect your internet and the extraction would still work. This isn't just a privacy feature; it’s a speed feature. ### Reconstructing Context This was the technical hurdle. When you highlight text in a PDF, the file doesn't store the words. It stores coordinates: "User drew a yellow rectangle at `[x, y]`." To get your data back, the tool has to perform a geometric intersection: 1. **Extract Geometry:** Get the quad points (corners) of every highlight. 2. **Map the Text:** Parse the page to get the bounding box of every text item. 3. **Intersect:** Run a collision detection loop. If a text item overlaps with the highlight, it belongs to that comment. 4. 
**Sort:** PDF text isn't always stored in reading order. The tool sorts items by Y then X coordinates to reconstruct the sentence naturally.

This turns abstract coordinates back into the human language you actually need.

---

## Zero Dropped Packets

Two export paths:

| Action | Use Case |
| :--- | :--- |
| **Copy Checklist** | Paste into Google Docs (formatted list) or GitHub/Notion (GFM checkboxes). |
| **Copy / Download Markdown** | Deep work. Includes page numbers and full context for AI agents or your favorite markdown editor. |

The new loop takes seconds: receive the PDF, drop it into **pdfcomments.app**, and paste the checklist into your Google Doc. It saves about 15 minutes of mindless copying per document. More importantly, it ensures every comment becomes a tickable box.

---

## Try It

**[pdfcomments.app](https://pdfcomments.app)**

I built this on a snowy Saturday in about the runtime of the new Tron movie. It works for my use case – your mileage may vary. Free and private. [View on GitHub](https://github.com/jlmiles4/pdfcomments.app).

---

## Terminal Velocity: Accelerate Your Workflow with the Command Line

- URL: https://landonmiles.com/blog/terminal-velocity
- Date: 2025-12-05
- Category: Development
- Tags: terminal, cli, workflow, productivity

## GUIs Are Training Wheels

Graphical User Interfaces (GUIs) are polite. They hide the mess. They abstract away the complexity of the file system, the network, and the OS into neat little icons. But if you want to build things – really build things – you need to take the training wheels off.

The terminal isn't just a retro way to interact with your computer. It is the closest you can get to the machine's actual model of reality. When you click a button, you are limited by what the UI designer thought you might want to do. When you type a command, you are limited only by your understanding of the system.

Command-line fluency is the leverage point for builders.
It is the difference between being a user of software and a master of it. --- ## A Brief History of the Black Box The term "terminal" itself is a relic. Back in the day, a **terminal** was a physical piece of hardware – essentially just a screen and a keyboard – connected by cable to a powerful central computer (a mainframe). This terminal had no processing power; it was just an input/output device. As personal computers became powerful enough to run their own programs, we still needed a way to interact with the underlying operating system in that same text-based, command-driven style. That's where the **terminal emulator** comes in. It's a software program (like iTerm2, Alacritty, Windows Terminal, or Ghostty) that *mimics* the behavior of those old physical terminals. Inside that terminal emulator, you run a **shell** (like Bash, Zsh, or Fish). The shell is the program that interprets the commands you type and sends them to the operating system. So, to be clear: * **Terminal Emulator:** The window/application you type into. * **Shell:** The program that interprets your commands. You can use any shell inside any terminal emulator. The customization you do affects your shell, not just the window. --- ## Why the Terminal Still Matters It’s easy to dismiss the command line as nostalgia or gatekeeping. It’s neither. It is about **portability** and **composability**. ### 1. The Skill That Travels A button in VS Code might move next week. The settings menu in Windows looks nothing like macOS. But `ls`, `cd`, `grep`, and `ssh`? Those work on your laptop, on your home server, and on cloud instances. When you learn the terminal, you aren't learning a tool; you are learning an ecosystem that spans decades and platforms. ### 2. Composability Beats Features GUIs give you features. The terminal gives you building blocks. If you need to find every text file containing the word "error" and move it to a debug folder, a GUI makes that a manual drag-and-drop nightmare. 
In the terminal, it's one line:

```bash
grep -l "error" *.txt | xargs -I {} mv {} ./debug/
```

The pipe operator (`|`) is the most powerful concept in computing. It allows you to chain simple tools together to solve complex, unforeseen problems.

### 3. The Production Reality

Real servers don't have monitors. If you work in DevOps, cloud infrastructure, or backend engineering, you will eventually be staring at a black screen with a blinking cursor. You can't RDP into a Lambda function.

---

## What You Actually Learn

Using the terminal forces you to learn the mental models of the computer.

* **Permissions:** You stop guessing why a file won't save and start understanding `chmod`, `chown`, and user groups.
* **Processes:** You learn that applications aren't magic windows; they are PIDs that consume resources and can be sent signals with `kill` (e.g., `TERM`, `HUP`).
* **Networking:** You stop checking Wi-Fi bars and start checking connectivity with `ping`, `curl`, and `dig`.

The terminal removes the abstraction layer. It shows you exactly what is happening, even when it's ugly.

---

## Building Fluency Deliberately

You don't need to uninstall your desktop environment today. Mastery comes from deliberate practice, not suffering.

### 1. Replace One Workflow

Pick one thing you do with a mouse and learn to do it with a keyboard.

* **Git:** Stop using the Source Control tab. Learn `git status`, `git add -p`, and `git commit`.
* **Navigation:** Stop clicking through Finder or Explorer. Use `cd` and `ls`.

### 2. Learn the Core, Ignore the Rest

You don't need to memorize `man` pages. You need the daily drivers:

* **Navigation:** `cd`, `pwd`, `ls`
* **File Ops:** `cp`, `mv`, `rm`, `mkdir`, `touch`
* **Reading:** `cat`, `less`, `head`, `tail`
* **Search:** `grep`, `find`

### 3. Customize Your Environment

The default terminal often leaves a lot to be desired. Make it yours.
Install developer fonts for improved readability, set up a comfortable color scheme, and customize the interface to suit your workflow. * Install a modern shell like `zsh` or `fish`. * Set up a prompt that gives you context (current directory, git branch, error status). * Create aliases for commands you type often (`gs` for `git status`). --- ## A Note on Platforms and Servers Before diving into the commands, it's important to note: **The terminal is the standard interface for the internet's backend.** Almost every server you will ever interact with runs Linux. When you SSH into an EC2 instance, a DigitalOcean droplet, or even a Raspberry Pi, you are dropping into a shell. There is no mouse. There is no "File -> Open." There is only the prompt. The good news is that the commands below are nearly universal. They work natively on **Linux** and **macOS** (which is Unix-based). On **Windows**, they work perfectly within WSL (Windows Subsystem for Linux) or Git Bash. Learn them once, use them everywhere. --- ## The Compound Interest of CLI Skills The hardest part of learning the terminal is the first two weeks. It feels slow. You will feel stupid looking up how to rename a folder. But unlike GUI knowledge, which depreciates every time an interface updates, terminal knowledge compounds. The regex you learn today helps you grep logs next month. The SSH config you set up for your home lab helps you debug a production outage next year. The shell script you write to automate a backup becomes the foundation of your CI/CD pipeline. Embrace the discomfort. The terminal is the lever that moves the world. --- ## The Builder's Cheat Sheet Here is a quick reference for the commands you will use 90% of the time. 
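These commands also compose: chained with pipes, they become one-liners. A runnable sketch – the directory, file names, and log contents below are invented for illustration:

```shell
# Set up a scratch directory with two fake log files (names and contents invented).
mkdir -p /tmp/cli-demo
printf 'INFO boot\nERROR disk full\nINFO ready\n' > /tmp/cli-demo/app.log
printf 'ERROR timeout\nERROR timeout\n' > /tmp/cli-demo/worker.log

# grep -> sort -> uniq: count each distinct error line across all logs.
# -h suppresses file names; sort groups duplicates so uniq -c can count them.
grep -h "ERROR" /tmp/cli-demo/*.log | sort | uniq -c
#   1 ERROR disk full
#   2 ERROR timeout
```

Point the glob at a real log directory and the same pipeline works unchanged.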
### Navigation

| Command | Description | Example |
| :------ | :---------- | :------ |
| `pwd` | Print Working Directory (show where you are) | `pwd` |
| `ls` | List directory contents | `ls -l` (long format), `ls -a` (all files) |
| `cd` | Change Directory | `cd ~` (home), `cd ..` (parent) |

### File & Directory Management

| Command | Description | Example |
| :------ | :---------- | :------ |
| `mkdir` | Make Directory | `mkdir new_project` |
| `touch` | Create empty file / update timestamp | `touch new_file.txt` |
| `cp` | Copy files or directories | `cp file.txt backup/` |
| `mv` | Move/Rename files or directories | `mv old.txt new.txt` |
| `rm` | Remove files | `rm file.txt`, `rm -rf dir` (careful!) |
| `cat` | Display file content | `cat log.txt` |
| `less` | View file content interactively | `less large_log.txt` |
| `head` | Display first 10 lines | `head -n 10 file.txt` |
| `tail` | Display last 10 lines | `tail -f log.txt` (follow updates) |

### Text Processing & Search

| Command | Description | Example |
| :------ | :---------- | :------ |
| `grep` | Search for patterns in files | `grep "ERROR" app.log` |
| `find` | Search for files | `find . -name "*.js"` |
| `sort` | Sort lines of text | `sort names.txt` |
| `uniq` | Filter adjacent duplicate lines | `sort list.txt \| uniq` |

### System & Networking

| Command | Description | Example |
| :------ | :---------- | :------ |
| `ps` | Snapshot of current processes | `ps aux` |
| `top` | Live process monitor | `top` |
| `htop` | Interactive process viewer (often needs install) | `htop` |
| `kill` | Terminate a process | `kill 12345` (PID) |
| `ping` | Check network connectivity | `ping google.com` |
| `curl` | Transfer data from/to a server | `curl -O https://site.com/file.zip` |
| `ssh` | Secure Shell (remote login) | `ssh user@server.com` |

---

## Avoiding Hype: How to Write Honestly About Complex Products

- URL: https://landonmiles.com/blog/avoiding-hype
- Date: 2025-12-02
- Category: Technical Marketing
- Tags: technical-marketing, product-marketing, communication

## The Default Setting is Distrust

"Next-generation, AI-powered, enterprise-grade platform for synergy."

If you work in tech, you've read that sentence a thousand times. And if you're like most technical buyers, you stopped reading immediately after. When a product description is that dense with superlatives, it usually means one thing: **we can't explain what it actually does.**

There is a natural pressure in marketing to go big. Founders want to disrupt industries. Sales teams want a silver bullet. But when you are selling to engineers, developers, or IT pros, hype is not just ineffective – it's a liability.

Technical buyers default to distrust. They are trained to find edge cases, identify failure points, and verify claims. When you lead with "revolutionary," they immediately ask "how?" When you say "seamless," they ask "what about my legacy dependencies?"

Precision, context, and evidence beat superlatives every time. Honesty isn't a concession you make to legal; it's a competitive advantage.

---

## Spotting the Hype Patterns

You can't fix what you don't see.
Hype usually hides in three specific patterns:

1. **Vague Superlatives:** Words like "best-in-class," "unparalleled," or "revolutionary." They take up space without adding information.
2. **Buzzword Stacking:** "AI-driven blockchain synergy." It sounds expensive but means nothing.
3. **Cherry-Picked Metrics:** "10x faster" (but only on a specific, unlisted hardware configuration running a 'Hello World' script).

### The Translation Layer

Here is what your audience hears when you use hype:

| **The Claim** | **What the Engineer Hears** |
| :--- | :--- |
| "Seamless integration" | "We haven't documented the API yet." |
| "Zero-configuration" | "Good luck customizing this when it breaks." |
| "Single pane of glass" | "An iframe that loads 4 different dashboards slowly." |
| "Unlimited scale" | "We haven't tested this past 1,000 users." |

---

## Why Hype Fails Technical Buyers

Engineers verify claims. It's their job. If you promise "instant deployment" and it takes 4 hours to configure the IAM roles, you haven't just annoyed a user – you've lost credibility for the entire product.

Overpromising creates three distinct problems:

1. **Reputational Debt:** Once an engineer marks you as "marketing fluff," it takes years to undo that damage. They will assume your documentation is equally unreliable.
2. **The Wrong Conversions:** Hype attracts people looking for magic. When your product turns out to be software (which has constraints), they churn.
3. **Support Load:** Users who buy the dream file tickets when reality sets in. "You said it was instant" becomes a P1 ticket for your support team.

Honesty filters out the wrong prospects early and builds trust with the right ones.

---

## A Blueprint for Honest Communication

Writing honestly doesn't mean writing boring copy. It means trading adjectives for specifics.

### 1. Define the Problem, Then the Solution

Don't start with the solution ("Our AI tool"). Start with the pain ("Manually parsing 10,000 log lines is hell").
When you accurately describe the user's problem, they trust that you understand the solution. ### 2. Anchor Benefits with Context Never just say "fast." Say "processes 1GB of data in 400ms on a standard t3.medium instance." Context turns a claim into a fact. ### 3. State Limitations Plainly This is the hardest one for leadership to swallow, but the most effective for buyers. * "This works best for teams managing < 500 endpoints." * "Currently supports AWS and Azure; GCP is on the roadmap." When you admit what your product *doesn't* do, users instantly believe you about what it *does* do. ### 4. Show Your Work Link to the methodology. Show the benchmark code. Provide a sandbox. If your product is good, the proof is your best marketing asset. --- ## Handling Internal Pressure You will inevitably face pressure to "jazz it up" from stakeholders who are afraid the truth isn't exciting enough. **Reframe the request.** When a founder asks for "more punch," give them "more proof." * Instead of adding "revolutionary," add a customer quote about a 50% time reduction. * Instead of "blazing fast," add a performance graph comparing you to the status quo. **Use honest competitors.** Point to tools like Stripe, Vercel, or Tailscale. Their marketing is often dry, technical, and incredibly effective. They win because they respect the user's intelligence. **Position the "Not Yet."** If a feature is missing, don't fake it. Sell the vision of where you are going, but clearly label it as **Roadmap**. Engineers understand development cycles; they don't understand being lied to. --- ## Operationalizing Honesty Honesty isn't a one-time edit; it's a process. * **The Specificity Check:** Before publishing, highlight every adjective. Can you replace it with a number or a noun? * **The Engineering Review:** Don't just ask engineers to check for typos. Ask them: "Does this sound true? Would you believe this?" 
* **Internal Guardrails:** Create a style guide that explicitly bans certain buzzwords unless they are technically accurate in context. If a claim can't be verified, rewrite it or remove it. Honesty narrows your audience, and that is the point. You don't want everyone; you want the users who will actually succeed with your product. By respecting their time and intelligence, you earn the right to be part of their stack. --- ## The Key to Technical Marketing: Find Your Lego Level - URL: https://landonmiles.com/blog/find-your-lego-level - Date: 2025-11-30 - Category: Technical Marketing - Tags: technical-marketing, content-strategy, product-marketing ## The Challenge of Technical Marketing There's a natural tension in selling technical products. R&D and marketing excel in different domains: one lives in technical depth, the other in broad communication. Both are essential. There would be no product without R&D, and no one would understand or discover it without marketing. But because each team operates from different instincts, their strengths can pull in different directions. R&D gravitates toward architecture, internals, and precision. Marketing leans on clarity, reach, and storytelling. The problem is that these perspectives don’t automatically align. Deep technical explanations can overwhelm most audiences, while high-level messaging can feel incomplete without grounding details. The result is a gap between what the product _is_ and how customers understand what it can _do_ for them. That’s where technical marketing comes in. Your role is to **bridge the gap** – to translate technical depth into clear explanations of how the product solves real customer problems. You’re looking for the connection between what engineers built and the situations customers face, then communicating those solutions in a way that resonates. Doing that well means finding your **Lego Level**. --- ## The Lego Level Mindset ### 1. 
Focus on Specific Application and Outcomes Think about how Legos are sold. They aren’t marketed as boxes of plastic. They’re shown as race cars, spaceships, and castles. The value is in demonstrating what you can build and how the pieces fit together. Legos are intuitive because each piece exists to do something specific. Instructions don’t just describe a brick; they show where it goes and what it helps you build. Your documentation and marketing should work the same way. Don’t stop at naming features or components. Show what they’re for and how to use them to solve a real problem. Customers care about the outcome, not the internal machinery. Make the result obvious, then lay out the smallest set of steps required to reach it. That’s the difference between content that tells you what a brick _is_ and content that shows you _where it goes_. Features only matter when they're tied to concrete use cases. When you focus on the outcome, users gain confidence that the pieces you provide are the right ones for the job they're trying to do. ### 2. Abstract Away the "Injection Molding" Process When you build a Lego set, you don’t want a primer on the chemical makeup of the bricks or the temperature of the molding machines. Those details may be technically correct, but they’re irrelevant to the problem you’re trying to solve: building the car, spaceship, or castle. The bricks simply work. Pieces fit. Colors match. That reliability is expected – not a selling point. Your product is the same. Stability, sane performance, working authentication, and basic compatibility are **table stakes**. If these are present, users barely notice them. They just get on with their build. Technical marketing often gets stuck explaining the “injection molding.” We dig into backend architecture, component choices, or code complexity – details that matter to the people building the product, but not to the people using it. To find your Lego Level, **edit strictly**. 
Ask: _Does the customer need to know this to snap the pieces together?_ If not, you’re explaining chemistry when you should be showing a solution. People don't buy Legos because they want plastic bricks. They buy them for what the bricks can become. ### 3. Match Your Instructions to the Builder To solve your customer’s problem, you have to understand two things: **how complex your product is and how your users like to build.** Sometimes your product is a 9,000-piece Death Star. It’s huge, complex, and intimidating. Those users want detailed, step-by-step instructions. If you dump all the pieces on the table without a clear build path, they’ll walk away. In cases like this, the Lego Level looks like a blueprint: clear sequences, expected outcomes, specific screens, commands, and configurations that carry them from unboxing to finished build. You do this without talking down to them or reteaching fundamentals they already know. Other times, users just want to play with the Legos. They don’t need a full walkthrough. They want to know what pieces exist and how they can interconnect. For them, the Lego Level is a well-organized parts map: clean docs, schemas, modules, endpoints, and a handful of recipes that show what’s possible. Once they know what’s in the box and how it fits together, the best thing you can do is get out of the way and let them build. --- ## Putting the Lego Level Into Practice Using your Lego Level isn't a one-time exercise. It's a habit you build into every asset you create. A few practical approaches: - **Start from the finished build.** Before you write anything, decide what "castle" or "spaceship" you're showing. Choose a real problem, realistic environment, and clear outcome. - **Name the pieces in plain language.** Translate internal project terms into labels users naturally understand. If a feature name sounds like an internal codename, it probably is. 
- **Show how the pieces connect.** Use diagrams, short walkthroughs, and code samples that mirror real workflows. Think in flows, not isolated features. - **Cut the injection-molding details.** Save deep internals for reference docs and architecture papers. In primary solution content, focus on what to do, when to do it, and what happens as a result. - **Adapt to different builders.** Beginners may need step-by-step guides. Experienced users may want API references, quickstart repos, and high-level system diagrams. Both are valid – they just sit at different Lego Levels. **Technical marketing isn’t about describing bricks. It’s about showing what can be built.** --- --- ## Building Static Sites with AWS S3 and CloudFront - URL: https://landonmiles.com/blog/static-sites-and-aws - Date: 2025-11-29 - Category: Development - Tags: aws, static-sites, deployment ## You already know how to build. Let's talk about deploying it right. If you're a developer, running your own static site is one of the easiest ways to get something online while staying in your comfort zone: building, writing, and shipping. The tricky part is deploying that site in a way that’s fast, reliable, secure, and inexpensive. Most developers do not want to manage servers for a small personal site, and many hosted options feel either too limited or too opaque. Sure, you could use Squarespace, Wix, GitHub Pages, and plenty of others. But I prefer **AWS S3 and CloudFront** for the flexibility, cost, and control. Now hear me out – AWS can feel complicated at first. But Amazon S3 paired with CloudFront solves this cleanly. You get global performance, strong reliability, and a setup that usually costs just a few dollars per month. You stay in code-land instead of wrestling with drag-and-drop editors. And while cheaper or one-click platforms exist, the AWS workflow gives you practical, transferable experience that pays off as your projects grow. 
This guide walks through the entire workflow, no matter which static site generator you use, and gives you clear steps to go from zero to live. --- ## Why Static Sites – and Why CloudFront + S3? Static sites pair extremely well with AWS because the entire workflow is based on one simple output: a folder of static files. No matter which generator you use, the end result is the same – HTML, CSS, JavaScript, and assets that can be dropped directly into S3 and served globally through CloudFront. Most tools output this in a predictable directory: - Next.js (static export) → `out/` - Hugo → `public/` - Astro → `dist/` - Jekyll → `_site/` - Eleventy → `_site/` - Angular (static build) → `dist/` AWS does not know or care which tool produced them. The hosting process is completely **generator agnostic**. ### Advantages of static sites - **Fast page loads**: low latency and global caching. - **Minimal infrastructure**: no servers, databases, or runtime environment to manage. - **Low hosting cost**: pay for storage and bandwidth, which is cheap for personal sites. - **Simple, predictable deployments**: a single `sync` and `invalidate` command. - **Ideal for version control**: the entire site is deployable from your git repository. - **No backend to maintain**: less security patching and fewer sleepless nights. Static sites are perfect for blogs, documentation, personal sites, marketing pages, and lightweight product pages. Most importantly, static sites keep you focused on the code and content rather than backend infrastructure. You write your pages, run a build, and deploy a folder. That is the entire workflow. Your energy stays where it belongs – designing, writing, and building. Some generators emphasize content (Hugo, Jekyll, Eleventy). Others emphasize components or application structure (Next.js, Astro, Angular). But the AWS deployment workflow stays the same – build locally, upload to S3, and let CloudFront handle global delivery. 
---

## Getting Started with AWS

The first steps are all about security and setup. We'll harden your AWS account before we even touch S3.

---

### Step 1: Build Your Site Locally

Start by building your site with whatever tool you are most comfortable with. All modern static site generators produce a directory of output files that can go straight into S3.

Keep your workflow straightforward:

- Write your content
- Build locally
- Verify everything looks right
- Prepare the output folder for deployment

Once your site builds cleanly, you are ready for the AWS part.

---

### Step 2: Sign Up for AWS and Create a Secure SSO Admin

If you do not already have an AWS account, create one and sign in. This first login uses your root account, which has unrestricted access to everything in AWS.

#### 1. Harden the Root Account

The root account is too powerful for day-to-day work, and using it regularly increases the risk of accidental or malicious changes.

- Store the root email address and password in a reputable password manager (e.g., 1Password).
- Enable **MFA** on the root account (a hardware security key or an authenticator app is preferred).
- Save any recovery codes or MFA backup methods in a secure place.

The root account is for **getting in, getting out, and getting on with life.** You will rarely use it after this, and you shouldn't. If something goes wrong, you'll need it. Otherwise, you'll do everything from the admin account we set up below.

#### 2. Change the Default Region

Before creating anything, set your default region. Choose whatever is physically closest to you – as long as it is **not** `us-east-1`.

Friends do not let friends deploy in us-east-1. That region absorbs a huge share of AWS traffic and is known for occasional quirks. From a reliability and operational-sanity standpoint, you are usually better off in a less congested region. There are a few reasons to deploy there, but for a personal site, just choose something else.

#### 3. Enable IAM Identity Center (SSO)

AWS best practice is to avoid using the root account for everyday tasks – IAM Identity Center will become your primary login method going forward.

- In the AWS console, use the search bar at the top to find **IAM Identity Center**.
- Open it and enable it if prompted.

#### 4. Create Permission Sets

- In IAM Identity Center, go to **Permission sets**.
- Choose **Predefined permission sets** and add **`AdministratorAccess`**.
- Create a second permission set using the predefined **`Billing`** permissions.

If you're a solo developer, pairing the administrative role with a separate billing permission set is usually the simplest option. On a larger team, you'll want more granular roles so not everyone has full admin or billing access.

#### 5. Create Your SSO User

- Still in IAM Identity Center, click **Users**, then **Add user**.
- Enter a username and an email address. (You can reuse the root account's email or use a different one – the root login remains separate.)
- Assign the new user the **AdministratorAccess** permission set.
- Optionally (and recommended), assign the **Billing** permission set to this user as well so you can manage billing without using the root account.
- Complete the setup and verify the email to activate your SSO user.

#### 6. Save Your SSO Start URL and Stop Using Root

- After setup, IAM Identity Center will display a **user portal / start URL**. Save this URL in your password manager and bookmarks.
- Sign out of the root account.
- Sign back in using the new SSO user you just created – this will be your primary way to access AWS from now on.

---

### Step 3: Create Your S3 Bucket

Once you're logged in to your SSO admin account, search for **S3** in the AWS console. Click **Create bucket** and give it a name. This bucket will store your static build files.
Most defaults are correct, but double-check these:

- **Block all public access**: keep this **enabled**
- **Server-side encryption**: choose **SSE-S3**
- **Bucket key**: enable it
- **Object Lock**: leave it disabled

Static sites should always be served privately through CloudFront, not directly from S3.

After the bucket is created, open it and use **Create folder** to organize the files for your site. This is optional but helps keep things tidy if you ever host multiple projects.

---

### Step 4: Create Your CloudFront Distribution

CloudFront is the global CDN layer that makes your site fast everywhere, not just near the S3 region.

Search for **CloudFront** in the AWS console. Click **Create distribution** and configure:

- **Pricing class**: the free tier is fine, but pay-as-you-go is generally inexpensive. For most personal blogs, **pay-as-you-go ends up cheaper than AWS's monthly pricing packages**.
- **Name / description**: anything you want
- **Application type**: **Single-page web app** is fine for almost all static sites

Click **Next** when prompted for a domain name. We will add a custom domain later.

#### Configure Your Origin

- **Origin type:** Amazon S3
- **Bucket:** select your S3 bucket using **Browse**
- **Origin path:** your folder name, starting with a slash (e.g., `/your-folder`), if you created one in S3
- **Access:** enable **Allow private S3 bucket access to CloudFront**
  - This automatically creates an Origin Access Control (OAC) and applies the required bucket policy so **only CloudFront** can read from your bucket.

Use the recommended origin and cache settings – they are tuned for static assets.

#### Security Settings

Enable **Security protections** – these give you AWS Shield-level protection by default. Leave **Layer 7 DDoS attack protection** unchecked: it adds $30/month to your bill and is more relevant for large distributed applications than personal blogs.

Click **Next**, review the settings, and **Create distribution**.
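For reference, the bucket policy that the OAC setup applies on your behalf looks roughly like this – the account ID, bucket name, and distribution ID below are placeholders, not values to copy:

```json title="Bucket Policy Applied by OAC (sketch)"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/YOUR_DISTRIBUTION_ID"
        }
      }
    }
  ]
}
```

You never need to edit this by hand – the console manages it – but it is useful to recognize when auditing bucket permissions: only the CloudFront service principal, scoped to your specific distribution, can read objects.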
CloudFront will take a few minutes to deploy.

---

### Step 5: Set Default Root Object and Enable Bot Protection

Open your new CloudFront distribution.

#### Set the Default Root Object

Under **General**, click **Edit**. Scroll to **Default root object** and enter:

`index.html`

This ensures that hitting `/` loads your homepage correctly.

#### Bot Protection

Click the **Security** tab and enable Bot Protection. CloudFront groups automated traffic into categories. A strong starting point:

| Bot Category | Recommended Action | Why |
| --- | --- | --- |
| Unverified bot action | Block | High-risk automation with no legitimate purpose. |
| Signal: Bot data center | Block | Known bot-heavy networks that frequently generate abusive traffic. |
| Signal: Automated browser | CAPTCHA | Often scraping or scripted browsing – challenge to reduce noise without blocking legitimate tools. |
| Signal: Non-browser user agent | CAPTCHA | Typically automation or scraping tools impersonating browsers. |

Everything else can stay in **Monitor** mode. Watch your traffic and tighten categories individually when needed.

We will enable logging, custom error pages, and additional hardening after the site is live.

---

### Step 6: Set Up the AWS CLI and Deploy Your Build

We're going to use the AWS CLI because it's universal, scriptable, and works with every site generator.

#### Install the AWS CLI

- **macOS:** `brew install awscli`
- **Linux:** `sudo apt install awscli` (or use your distro's package manager)
- **Windows:** start by installing Linux. But if you're sticking with Windows, download and run the official AWS CLI MSI installer.
#### Configure SSO in the CLI

Once the CLI is installed, open your terminal and run:

`aws configure sso`

You will see prompts like these:

| Prompt | Explanation |
| --- | --- |
| SSO session name (Recommended) | A name for this session. It doesn't matter much if you're only logging in to this one AWS instance. |
| SSO start URL | The URL you got when you set up your SSO user. Typically something like `https://d-long-string.awsapps.com/start` |
| SSO region | The region you chose in Step 2. |
| SSO registration scopes [sso:account:access] | The default value is correct, so press Enter. |

Press Enter, and your web browser will open to authenticate your session. You'll then be asked a few more questions:

| Prompt | Explanation |
| --- | --- |
| CLI default client Region | I typically use the same region as above. |
| CLI default output format | `None` is fine – just press Enter. |
| CLI profile name | If this is the only AWS account you'll be working in, type `default`; it makes the login commands a little easier. |

#### Important CLI Commands

Your SSO session will expire periodically. When it does, run one of these before deploying or running other AWS CLI commands:

- If you named the profile `default`: `aws sso login`
- To refresh the session for a specific profile: `aws sso login --profile your-profile-name`

#### Deploying Your Build

#### 1. Authenticate

Make sure you are authenticated:

`aws sso login`

(If you used a non-default profile, add `--profile your-profile-name`.)

#### 2. Push to S3

From the directory where your build folder lives, run the **sync** command.
This command is fast because it only uploads new or modified files, and the `--delete` flag removes files from S3 that are no longer in your local folder:

`aws s3 sync ./out s3://your-bucket/your-folder --delete`

Adjust the folder name (`./out`) to match your generator's output. If you want to get fancy, you can also create a wrapper around this command – for example, `pnpm deploy:s3`.

#### 3. Invalidate CloudFront

Invalidation forces CloudFront to refresh its global cache so users see the latest version of your site. It doesn't delete anything – it just tells CloudFront to go check for new files in your S3 bucket.

Here is the command to invalidate everything:

```bash title="Invalidate CloudFront Cache"
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/*"
```

Replace `YOUR_DISTRIBUTION_ID` with the ID from your CloudFront distribution page. It can be found at the top of the page or in the URL.

---

### Step 7: Test Your Site

CloudFront will take a few minutes to propagate globally. Open the built-in CloudFront domain (e.g., `abcdefg1234567.cloudfront.net`) and verify that your pages render correctly.

If you type `/about` and it loads your about page, your static generator already handles folder-style indexing. If not, you will want **pretty URLs**.

---

### Step 8: Add a CloudFront Function for Pretty URLs

Static sites often generate files like:

`/about.html` and `/contact.html`

But most people expect:

`/about` and `/contact`

S3 does not automatically map `/about` to `/about.html`. CloudFront can do this for you using a **CloudFront Function**.

#### What Is a CloudFront Function?

CloudFront Functions are small pieces of JavaScript that run on CloudFront's global edge network _before_ the request reaches your S3 bucket.
They can:

- Rewrite URLs
- Add or remove headers
- Redirect pages
- Block automated traffic

They run extremely fast, cost almost nothing, and do not require any backend infrastructure. They are ideal for simple logic like URL rewriting. For heavier logic or access to the response body, AWS also provides Lambda@Edge, but you won't need that for static sites.

#### How to Create the Pretty URL Rewrite Function

#### 1. Open CloudFront Functions

In the AWS console search bar, type **CloudFront** and open it. From the left menu, select **Functions**.

#### 2. Create the Function

- Click **Create function**
- Name it something descriptive like `pretty-urls`
- Choose **CloudFront Function**
- Click **Create function**

#### 3. Add the Rewrite Logic

Paste this code into the editor:

```javascript title="CloudFront Function - Pretty URLs"
function handler(event) {
  var request = event.request;
  var uri = request.uri;

  // If the request already includes a file extension, leave it alone
  if (uri.includes('.')) {
    return request;
  }

  // If it ends with a slash, serve that directory's index page.
  // (The distribution's default root object only applies at the site
  // root `/`, not in subdirectories, so append index.html ourselves.)
  if (uri.endsWith('/')) {
    request.uri = uri + 'index.html';
    return request;
  }

  // Otherwise append .html so /about becomes /about.html
  request.uri = uri + '.html';
  return request;
}
```

Click **Save**, then **Publish**.

#### Attach the Function to Your Distribution

1. Open your CloudFront distribution
2. Go to the **Behaviors** tab
3. Edit the default behavior
4. Scroll to **Function associations**
5. Under **Viewer Request**, select your `pretty-urls` function
6. Save the changes

Once deployed, CloudFront will rewrite clean URLs automatically. If a user visits:

`https://your-site.cloudfront.net/about`

CloudFront will serve:

`/about.html`

Your users never see the rewrite – they just get the clean URL.

#### How Static Generators Differ

Some generators output nested index pages: `/about/index.html` and `/blog/index.html`. For those, the trailing-slash branch of the function maps `/about/` to the correct file. (For extensionless requests like `/about`, a nested-index site should append `/index.html` rather than `.html` in the final rewrite.)
Others output flat files: `/about.html` and `/blog.html`. For these, the CloudFront Function handles the rewrite.

Either way, CloudFront ensures your URLs look clean and modern.

---

### Step 9: Turn On Logging

Once everything works, enable CloudFront logging. To stay privacy-friendly:

- Anonymize IPs
- Avoid logging cookies
- Store logs in a dedicated S3 bucket

Logging costs pennies and gives you insight into performance and traffic patterns.

---

### Step 10: Add Your Domain

Buy a domain (Route 53, Cloudflare, Namecheap, etc.), add it to your CloudFront distribution, and issue a certificate through **AWS Certificate Manager**. Certificates are free and auto-renew when used with CloudFront. Note that certificates used with CloudFront must be requested in the `us-east-1` region, regardless of where the rest of your setup lives.

Update your DNS records to point your domain to CloudFront.

---

### Step 11: Deploy Changes and Invalidate the Cache

To update your site:

1. Rebuild locally
2. Upload new files to S3
3. Invalidate CloudFront's cache

Invalidation forces CloudFront to refresh its global cache so users see the latest version of your site. Here is the command to invalidate everything:

```bash
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/*"
```

Replace `YOUR_DISTRIBUTION_ID` with the ID from your CloudFront distribution page.

---

## Quick Reference: Commands You'll Use Regularly

A compact list of core commands you'll run as part of your normal workflow. You can also create pnpm or other wrapper commands around these to make your life easier.
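As one sketch of such a wrapper, the sync and invalidate steps can live in a single script. The bucket path, distribution ID, and `./out` default below are all placeholders for your own values:

```bash title="deploy.sh (sketch)"
#!/usr/bin/env bash
# One-shot deploy: sync the build folder to S3, then invalidate CloudFront.
# All three defaults below are placeholders -- substitute your own values.
set -euo pipefail

deploy() {
  local build_dir="${1:-./out}"
  local bucket_path="${2:-s3://your-bucket/your-folder}"
  local dist_id="${3:-YOUR_DISTRIBUTION_ID}"

  # Upload only new or changed files; delete remote files removed locally.
  aws s3 sync "$build_dir" "$bucket_path" --delete

  # Force CloudFront to refetch everything from S3.
  aws cloudfront create-invalidation \
    --distribution-id "$dist_id" \
    --paths "/*"
}

# To use as a script, uncomment the next line:
# deploy "$@"
```

With the last line uncommented, `./deploy.sh ./public` deploys a Hugo build, `./deploy.sh ./dist` an Astro build, and so on.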
```bash title="AWS SSO Login"
aws sso login
```

```bash title="Upload to S3"
aws s3 sync ./dist s3://your-bucket/your-folder --delete
```

```bash title="Invalidate CloudFront Cache"
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/*"
```

```bash title="List CloudFront Distributions"
aws cloudfront list-distributions
```

```bash title="List S3 Bucket Files"
aws s3 ls s3://your-bucket/your-folder/
```

Having these in one place makes updating your site as simple as rebuilding, syncing, and invalidating.

---

## Go Forth and Stay Static

Running a static site on S3 and CloudFront gives you an ideal balance of performance, cost, and control. Everything stays fast globally, deployments are predictable, and you never have to think about maintaining servers or patching application infrastructure. You spend your time building your site, writing content, and refining your design – not worrying about uptime, scaling, or backend logic.

Beyond the basics, you can layer on additional capabilities at your own pace:

- Analytics via CloudFront logs or privacy-friendly tools
- Lightweight forms using services like Formspree or custom API routes
- Deeper security using WAF or custom CloudFront Functions
- CI/CD pipelines for automatic deployments

This workflow scales effortlessly from tiny personal blogs to documentation sites, product pages, and even complex component-driven frontends. Once the basics are in place, you gain a powerful foundation that grows with your projects and helps you build confidence with AWS.

The entire setup typically costs a few dollars per month.

---
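As a taste of the CI/CD option, the sync-and-invalidate workflow automates cleanly. Here is a rough GitHub Actions sketch – the build command, role ARN, region, bucket path, and secret name are all illustrative assumptions, not drop-in values:

```yaml title=".github/workflows/deploy.yml (sketch)"
name: Deploy static site
on:
  push:
    branches: [main]

permissions:
  id-token: write   # lets GitHub assume an AWS role via OIDC
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Replace with your generator's install + build commands
      - run: npm ci && npm run build

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111122223333:role/your-deploy-role
          aws-region: us-west-2

      - run: aws s3 sync ./out s3://your-bucket/your-folder --delete

      - run: |
          aws cloudfront create-invalidation \
            --distribution-id "$DISTRIBUTION_ID" \
            --paths "/*"
        env:
          DISTRIBUTION_ID: ${{ secrets.DISTRIBUTION_ID }}
```

Setting up the IAM role and OIDC trust is its own project, but once it's in place, every push to `main` rebuilds and redeploys the site with no manual steps.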