Project-Specific Claude Code Customisations

I recently attended a workshop at Version 1's London offices run by Phillie Wright and James Lowe from Anthropic. It covered advanced techniques for customising Claude Code with project-specific MCP servers, skills, hooks, and tools. I couldn't wait to put what I'd learnt into practice.

My aim was to apply the learnings to a real project I'm working on: PeerTalk. It's an SDK for logging and networking across modern operating systems and retro Apple Macs. The plan is to build a proper abstraction layer over BSD sockets, MacTCP, OpenTransport, and AppleTalk, with consistent logging across all platforms.

I'd already done all the planning. The plan folder has 10 phases broken down into sessions with implementation tasks. What I needed was the right Claude Code tooling to help me implement those plans efficiently - particularly for the Classic Mac side where you're dealing with obscure APIs, interrupt safety rules, testing on real hardware, and the manual effort of transferring binaries from the host machine to the real Mac over FTP.

# The Setup

I built a complete Claude Code customisation suite for the project: 14 skills, 4 hooks, 4 platform-specific rule files, Python validators and build tools, an MCP server, and an auto-triggered debugging agent. The full details are in CLAUDE-CODE-SETUP.md, but I'll cover the key components here and the rest as I use them to implement the SDK.

# Classic Mac Hardware MCP Server

The MCP server gives Claude Code direct access to my Classic Mac test machines over two protocols: FTP for file operations and TCP for remote execution.

The Macs run RumpusFTP for file access and LaunchAPPL (a TCP server from Retro68, listening on port 1984) for remote execution. The MCP server gives Claude the power to:

  • Deploy compiled binaries via FTP (.bin and .dsk files)
  • Execute applications remotely via the LaunchAPPL TCP server
  • Fetch application logs via FTP
  • Manage folders and files on the real Mac - Claude can create directories, upload files, download logs, and clean up old builds, all through the FTP connection to the actual Classic Mac hardware

What makes this powerful is that you can say things like "build and deploy the LaunchAPPLServer to the Performa 6200 Mac" and Claude will cross-compile the binary in Docker, then use the MCP server to upload it via FTP to the actual Classic Mac hardware and verify the deployment. The skills I've built (/build, /deploy, /execute, /fetch-logs) orchestrate the MCP server's primitives into higher-level workflows.
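
To give a flavour of those primitives, here's a hedged sketch of the kind of FTP deploy helper the MCP server might wrap, using Python's standard ftplib. The registry entry, host address, and anonymous login are illustrative assumptions, not the real configuration:

```python
from ftplib import FTP
from pathlib import Path

# Hypothetical machine registry entry -- real hosts and paths will differ.
MACHINES = {
    "performa6200": {"host": "192.168.1.60", "apps_dir": "Applications"},
}

def deploy_binary(machine: str, local_path: str) -> str:
    """Upload a compiled artifact to a Classic Mac over FTP (binary mode)."""
    path = Path(local_path)
    if path.suffix not in {".bin", ".dsk", ".sit"}:
        raise ValueError(f"unexpected artifact type: {path.suffix}")
    cfg = MACHINES[machine]
    with FTP(cfg["host"]) as ftp:       # RumpusFTP on the Mac
        ftp.login()                     # anonymous login assumed here
        ftp.cwd(cfg["apps_dir"])
        with path.open("rb") as f:
            # Always binary mode -- ASCII mode would corrupt the file.
            ftp.storbinary(f"STOR {path.name}", f)
    return f"deployed {path.name} to {machine}"
```

In the real setup, a skill like /deploy would call a primitive of this shape after the Docker build step.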

Here's an example of asking Claude to copy a .sit archive to the Mac - it figures out how to use the MCP server to do it:

Claude uses the MCP server to copy a .sit archive to the Classic Mac via FTP

Following Anthropic's code execution pattern, the server is designed to be driven from code rather than direct tool calls. Log files stay in the execution environment and only summaries are passed to the model, cutting token usage by 98% or more.
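
As a sketch of that pattern, a log-fetching tool can return a compact summary dict instead of the raw file. The field names here are illustrative, not the server's actual schema:

```python
import re

def summarise_log(text: str, max_errors: int = 5) -> dict:
    """Reduce a fetched log file to a small summary for the model.

    The raw log stays in the execution environment; only this dict
    (a few hundred bytes) goes back to the model, instead of the
    full file contents.
    """
    lines = text.splitlines()
    errors = [line for line in lines if re.search(r"\b(ERROR|FATAL)\b", line)]
    return {
        "total_lines": len(lines),
        "error_count": len(errors),
        "first_errors": errors[:max_errors],  # enough to diagnose, not the lot
        "last_line": lines[-1] if lines else "",
    }
```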

View the MCP server

# Setup Skills

Two skills handle Classic Mac hardware setup:

setup-machine registers a new Mac in the machine registry, verifies FTP connectivity, and creates the directory structure.
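
A registry entry might look something like this — a minimal sketch, with field names that are my assumptions rather than the skill's actual schema:

```python
# Hypothetical registry entry shape -- field names are illustrative.
REQUIRED_FIELDS = {"name", "host", "platform", "launchappl_port"}
VALID_PLATFORMS = {"mactcp", "opentransport"}

def validate_machine(entry: dict) -> list[str]:
    """Return a list of problems with a registry entry (empty = OK)."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - entry.keys())]
    if entry.get("platform") not in VALID_PLATFORMS:
        problems.append(f"unknown platform: {entry.get('platform')!r}")
    return problems
```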

setup-launcher builds and deploys LaunchAPPLServer to the Mac. It uses the Retro68 toolchain in Docker to cross-compile for the right platform (68k for MacTCP, PPC for OpenTransport) and uploads the binary via FTP. From there, you unpack the binary or mount the .dsk image on the Classic Mac, launch LaunchAPPLServer, and configure it to listen on port 1984 using OpenTransport or MacTCP depending on the platform.

Here's the setup process in action:

Setting up a Classic Mac using the setup-machine and setup-launcher skills

The results on the actual Classic Mac hardware:

Empty FTP folder before setup
Directory structure created by /setup-machine
.sit archive copied to Mac via MCP server
Demo apps deployed via FTP
Binaries unpacked from .bin files
Dialog demo running
Hello World demo running

View setup-machine | View setup-launcher

# Implement Skill

This skill handles the full implementation workflow for a session from the phase plans.

When you run /implement 1 1.2, it:

  1. Spawns parallel agents to gather context (session spec, platform rules, dependencies, existing code)
  2. Verifies dependencies are met
  3. Implements each task in order, using /mac-api to verify Classic Mac API details
  4. Runs verification checks (build, tests, ISR safety, quality gates)
  5. Marks the session as DONE in the phase file

The skill handles all the workflow around implementing a session: gathering context, checking dependencies, writing code following the platform rules, running comprehensive verification, and tracking progress. It means I can focus on the plans themselves rather than the mechanics of following them.
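
Step 5, marking a session DONE, can be sketched as a small text transform. The `## Session N.N [STATUS]` line format below is an assumption about the phase files, not their actual layout:

```python
import re

def mark_session_done(phase_text: str, session: str) -> str:
    """Flip a session's status marker to DONE in a phase file.

    Assumes a line format like '## Session 1.2 [TODO]' -- the real
    phase files may use a different convention.
    """
    pattern = rf"(## Session {re.escape(session)}) \[\w+\]"
    new_text, count = re.subn(pattern, r"\1 [DONE]", phase_text)
    if count == 0:
        raise ValueError(f"session {session} not found")
    return new_text
```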

Here's the implement skill in action, implementing Phase 0 Session 1, running tests, and suggesting the next session:

Using /implement to build Phase 0 Session 1, run tests, and get the next session suggestion

View the implement skill

# Mac API Skill

When implementing Classic Mac code, you often need to verify function signatures, check if a call is interrupt-safe, or look up error codes. The /mac-api skill searches authoritative Classic Mac reference books - Inside Macintosh, the MacTCP Guide, and Open Transport documentation - and returns exact quotes with line-level citations.

It works in two stages. First, it checks pre-built indices with 741 functions (including interrupt-safety flags), 185 error codes, and key tables like Table B-3 (interrupt-safe routines) and Table C-1 (Open Transport notifier-safe functions). If the information isn't in the index, it does a targeted grep and reads the specific line range from the books.
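
The two-stage strategy can be sketched as: check the index first, then fall back to grepping the books. The index contents and the grep callable below are stand-ins for the skill's real data files:

```python
def lookup_api(name: str, index: dict, grep_books) -> dict:
    """Two-stage lookup: fast pre-built index first, grep fallback second.

    `index` maps function names to facts (interrupt-safety, citation);
    `grep_books` is a callable that searches the reference texts and
    returns matching passages. Both are stand-ins for the real skill.
    """
    if name in index:
        return {"source": "index", **index[name]}   # cheap, pre-verified
    passages = grep_books(name)                     # targeted full-text search
    return {"source": "grep", "passages": passages}
```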

This means you can ask things like "is OTAllocMem safe at interrupt time?" or "what does error connectionClosing mean?" and get back verified answers from the official Apple documentation, not LLM hallucinations.

Here's the skill in action - asking a question, getting the answer from the documentation, and then adding it to the MacTCP rules:

Using /mac-api to look up API details and update the MacTCP rules

View the mac-api skill

# Hooks

Hooks are shell commands that run automatically in response to events like tool calls. I've set up four hooks that automate quality checks and catch issues before they become problems.

The ISR safety check hook blocks edits before they're written if they introduce interrupt safety violations in callback code. It maintains a database of forbidden function calls (malloc, memcpy, blocking I/O) and checks them against callback patterns in MacTCP, OpenTransport, and AppleTalk code. When it detects a violation, it blocks the edit and suggests safe alternatives.
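
A minimal sketch of how such a PreToolUse hook could work: Claude Code passes the tool input as JSON on stdin, and a hook that exits with status 2 blocks the call and feeds its stderr back to the model. The forbidden-call list and the callback heuristic here are a small illustrative subset, not the hook's real database:

```python
import json, re, sys

FORBIDDEN = {"malloc", "free", "memcpy", "printf"}   # not ISR-safe
CALLBACK_HINT = re.compile(r"\b(asr|notifier|completion)\b", re.IGNORECASE)

def violations(new_code: str) -> list[str]:
    """Forbidden calls appearing in callback-looking code."""
    if not CALLBACK_HINT.search(new_code):
        return []                       # only police callback code
    return sorted(f for f in FORBIDDEN
                  if re.search(rf"\b{f}\s*\(", new_code))

def main() -> int:
    """Entry point, wired up as a PreToolUse hook in the settings file."""
    edit = json.load(sys.stdin)
    bad = violations(edit.get("tool_input", {}).get("new_string", ""))
    if bad:
        print(f"ISR safety: forbidden in callback code: {', '.join(bad)}",
              file=sys.stderr)
        return 2    # exit code 2 blocks the edit; stderr goes back to Claude
    return 0
```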

The quick compile hook runs syntax checks immediately after editing C files. All compilation happens in Docker containers using the right compiler for each platform: m68k-apple-macos-gcc for 68k code, powerpc-apple-macos-gcc for PPC, and standard gcc for POSIX. It gives immediate feedback when you save a file, so you catch syntax errors straight away.
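
The compiler selection reduces to a small mapping. The compiler names match those above; the Docker invocation and image name are assumptions, and -fsyntax-only is GCC's parse-only mode, which suits a quick syntax check:

```python
# Platform-to-toolchain mapping for the quick compile hook.
COMPILERS = {
    "68k": "m68k-apple-macos-gcc",
    "ppc": "powerpc-apple-macos-gcc",
    "posix": "gcc",
}

def compile_command(platform: str, source: str) -> list[str]:
    """Build the syntax-check command for one edited C file."""
    cc = COMPILERS[platform]
    cmd = [cc, "-fsyntax-only", source]     # parse only, no codegen
    if platform != "posix":
        # Retro68 cross-compilers live in the Docker image, not on the host.
        # Image name "retro68" is an assumption.
        cmd = ["docker", "run", "--rm", "-v", ".:/src", "retro68", *cmd]
    return cmd
```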

The ADSP userFlags hook warns when code accesses the AppleTalk userFlags field without clearing it. According to the Programming With AppleTalk documentation, failure to clear userFlags causes connection hangs. The hook detects userFlags access patterns and reminds you to add the required clear.

The coverage check hook runs after test commands and verifies code coverage meets the 10% minimum threshold. It looks for coverage data files, parses them with lcov, and warns if coverage falls short with actionable next steps.
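
Parsing the lcov tracefile is straightforward: lcov writes LF: (lines found) and LH: (lines hit) records per source file, so total coverage is a ratio of the sums. A sketch:

```python
def coverage_percent(info_text: str) -> float:
    """Total line coverage from an lcov tracefile (e.g. coverage.info)."""
    found = hit = 0
    for line in info_text.splitlines():
        if line.startswith("LF:"):      # lines found in this source file
            found += int(line[3:])
        elif line.startswith("LH:"):    # lines hit in this source file
            hit += int(line[3:])
    return 100.0 * hit / found if found else 0.0

def check_threshold(info_text: str, minimum: float = 10.0) -> str:
    pct = coverage_percent(info_text)
    if pct < minimum:
        return f"WARN: coverage {pct:.1f}% is below the {minimum:.0f}% minimum"
    return f"OK: coverage {pct:.1f}%"
```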

These hooks catch bugs before they hit real hardware. The ISR safety hook is particularly important - it's the only blocking hook, because ISR violations can cause hard-to-debug crashes on Classic Macs.

View the hooks

# Cross-Platform Debug Agent

Not Yet Tested

This agent is built and configured but hasn't been tested yet - I need to get far enough along with the PeerTalk SDK to have platform-specific bugs to debug. The concept is solid based on the workshop patterns.

This agent auto-triggers when you mention platform-specific issues like "works on Linux but crashes on SE/30". It:

  1. Fetches logs from the Classic Mac via the MCP server
  2. Compares them side-by-side with POSIX logs
  3. Reads the corresponding source files
  4. Checks for common pitfalls (ISR safety violations, byte ordering, alignment issues)
  5. Reports the exact fix with file:line references

The agent knows about MacTCP ASR callback rules, OpenTransport notifier restrictions, AppleTalk ADSP completion patterns, and the general ISR safety requirements. When it spots a divergence in the logs, it can usually pinpoint whether it's a byte order issue, a missing state check, or an ISR violation, and suggest the fix with exact line numbers.
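
The core of the side-by-side comparison can be sketched as finding the first line where the two logs disagree, assuming both builds emit the same event lines in the same order:

```python
from itertools import zip_longest

def first_divergence(posix_log: str, mac_log: str):
    """Find the first line where the two platform logs disagree.

    If both builds log the same events in the same order, the first
    mismatch localises the platform-specific bug.
    """
    a, b = posix_log.splitlines(), mac_log.splitlines()
    for i, (pa, pb) in enumerate(zip_longest(a, b), start=1):
        if pa != pb:
            return {"line": i, "posix": pa, "mac": pb}
    return None   # logs agree
```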

View the agent definition

# Starter Template

I want you to be able to follow along at home if you have Claude Code.

I've set up a branch called starter-template at the same starting point I'm working from. Anyone with Claude Code and some basic development experience can clone the project, check out that branch, and have a go at building the PeerTalk SDK themselves using the customisations I've built. The starter template guide walks through getting set up.

The implementation plans are there, the MCP server is configured, and the skills are ready to use. It's a full working example of how to structure a Claude Code project with proper customisations: all the planning is done, the phase files have the implementation tasks ready to go, and the Claude Code customisations provide the tools to execute those plans.

# Implementing

To implement Phase 1, Session 1.2:

/implement 1 1.2

Claude gathers the session spec, checks dependencies, implements the tasks using the platform rules and Classic Mac API documentation, builds and tests the code, and marks the session as complete.

# Deploying

You can use the skills directly or just ask in natural language. "Deploy the demo apps to the Performa 6200 Mac" works just as well as:

/build package          # Build binaries for all platforms
/deploy se30 mactcp     # Deploy to SE/30 via FTP

You'll see the code compiled in the Docker container using the Retro68 toolchain, then deployed via FTP. The directories on the actual Mac (or any FTP server, really) get populated automatically, and you can watch the Applications folder fill up with test builds as Claude deploys them.

# Testing on Real Hardware

Currently Testing

I'm actively testing this workflow. The tooling is working well - I've confirmed I can deploy demo apps to the Classic Macs via FTP and execute them remotely. The PeerTalk SDK hasn't progressed far enough yet to have the demo chat app, but the deployment and execution infrastructure is solid.

For running tests and fetching logs:

/execute se30           # Run the app via LaunchAPPL
/fetch-logs se30        # Get the PT_Log output

The cross-platform debug agent can compare logs between POSIX and Classic Mac builds to pinpoint platform-specific issues.

# What I've Learnt

The workshop made me think about how I approached building the Cookie prototype (project page). For Cookie, I got the plan down pretty solid and then did a series of simple prompts saying "implement phase x session y". It worked, but there wasn't much rigour around following the plan or tracking progress.

For PeerTalk, I'm using a specific skill tailored to implement the plans with much more rigour: following the plan exactly, tracking progress in the phase files, testing thoroughly, deploying to real hardware, all using a suite of tooling from the MCP server and other skills to help build out the SDK.

I've already used the tooling to build the first phase of the plan and it's working really nicely, leading to much better output. The workshop's concrete examples of MCP servers, skills, and agents are what prompted me to build my own.

Rather than just having conversations with Claude, you can build project-specific tooling that gives Claude exactly the context and capabilities it needs for your domain. For PeerTalk, that meant:

  • Automated deployments to real Classic Mac hardware via FTP, with remote execution over TCP
  • Auto-triggered debugging with tooling to automatically fetch logs from various real Macs running the application
  • Structured implementation workflows that follow the phase plans
  • Automated verification hooks that catch bugs before they hit real hardware

If you're working on a project with Claude Code and find yourself repeating the same context or workflows, it's worth looking into MCP servers and skills. The CLAUDE-CODE-SETUP.md in the PeerTalk repo shows the full structure.