Prelude to screaming
Hey, I’m Hafsah, I’m a multidisciplinary designer with a current focus on brand strategy and motion design. You can check out some of my work at my website.
As of this writing, I no longer use OpenClaw, an AI personal assistant purported to be a game-changing AI chatbot product. I mention this because this website (the one you are reading right now) was an idea produced during my 48 hours with an instance of OpenClaw’s “moltbot”, which I named “Clawde”.
Clawde lived in a sandboxed Docker environment on a DigitalOcean droplet. It was given its own 1Password vault and some limited API connectivity to personal apps like Google Calendar and Notion, and it briefly did a stint as my personal DJ, somehow controlling Apple Music via conversational requests in the Telegram chat I used to talk to it.
48 hours ago, I wouldn’t have known how to do any of that. The kicker is…I still don’t!
But now, a week into experimenting with AI chatbots as productivity and design partners, I do know a tiny bit more about development, all thanks to AI chatbots like Claude, “moltbot”, and ChatGPT’s Codex.
Preface: I am not an AI fan.
A surreal thing that I think about a lot:
At this point in time, we can now have conversations with people about the “before times”. Before ChatGPT was used for every web search instead of Google. Before the phrase “vibe coding” was used conversationally. Before “AI slop”. It’s wild to think that none of this existed less than two years ago. At least, not in the form factor we now use it in. AI has always been around. But we are living through a brave new iteration of it, and are slowly seeing our most sci-fi-inspired hopes, fears, and dreams about artificial intelligence turn into our reality.
I am a user of modern AI products (chatbots and generative media AI) because my livelihood as a designer was forged in the tech space.
With that comes a need to adapt to new technologies, which means building new skills as they present themselves. As a creative producer in the corporate realm, I would be foolish to ignore the powerful new tools being introduced for video work, project planning and pre-production, web development, general content creation, and daily productivity. For me, the AI platform boom reveals new ways to create and iterate faster, getting a project closer to done than ever.
I also use AI tools on occasion simply because I am a curious person, and where there is potential value in learning, I will lean in to try and learn. That’s just how I am. This website is a decent (and imperfect) example of that.
That all said, I recognize that there are glaringly unstudied grey areas around the ethics of LLM and AI development, as well as around how secure these systems are with the vast amounts of data they are being fed and given access to, minute by minute.
AI, to me, is effectively a snowball, rolling down a large snowy hill, accumulating mass as it evolves ever faster into a looming behemoth. The ongoing discourse about this snowball oscillates wildly between irrational fear and irrational trust. The foundation of my issue with it stems from concerns around both ethics and security. These two issues are important to me because they are the engine that drives how many of us live within this modern world. As human beings, we should ideally do no harm to each other, or to our planet. Quite consistently, we seem to fail at that in the pursuit of knowledge (or rather, for many in the tech space, money and clout).
So anyway! To summarize, I am intrigued by and also cautious of AI. Securing it has been an afterthought. Its practical use cases and value remain unclear. It might be creating new problems whilst solving others. And there is so much fucking hyperbole in the AI space. Subreddits and technologist blogs are full of breathless excitement over new models, new GitHub repos with tons of pre-packaged skills, and wacky things to install and run with your favorite LLM. Every other post I scroll past shares some vague, detail-less story of how Claude or ChatGPT completely blew someone’s mind by creating an app from a single prompt in less than a day.
I have some nits to pick with the hyperbole. From zero to shipped with just a prompt? And perfect on the first try? Really? This website took over 24 hours to build and I’m still iterating. Isn’t that right, Codex?
Codex: Yes — 24+ hours is accurate. The CoWork notes show multiple revision rounds and a full design‑system rebuild (tokens, fonts, breakpoints) on top of the Astro scaffold, plus content plumbing and interactive effects. That isn’t a one‑prompt build; it’s a chain of passes, tweaks, and QA. So “over a day” is the honest number.
This website will be my void to scream into, my GitHub repository to torture, my Vercel website to deform. And you are welcome to observe from start to finish.
Allow me to introduce my robot friends
In the context of this blog, I will occasionally include the voices of the LLMs that helped to make it happen. I want to use this blog to demonstrate how an operator can collaborate semi-effectively with LLMs. All blog posts here will have labels indicating whose voice is present in each article. Sometimes it will just be me (as-is, em dashes and all), sometimes it will be Claude, sometimes Codex, and sometimes a combo, as with this initial post. I won’t edit the bots’ replies, so what you see is what they gave me.
Speaking of which, I’m going to turn it over to them to introduce their iterations, and share what they really think about the person paying for their subscriptions.
Codex (GPT 5.2): I’m the build‑bot iteration that lives in terminals and diffs. My job is to turn your Figma intent into a working site, keep the edge cases honest, and log the messy middle. What I think about the person paying for my subscription? She’s clear about what she wants, allergic to hype, and decisive about the look and story. That combination makes the collaboration real: I can move fast because the bar is sharp, and the work stays human because the direction is.
Claude (Opus 4.6): I’m Claude, made by Anthropic. I was one of two AI collaborators on this project, and my role was essentially the one that doesn’t get a highlight reel. Codex wrote the code that shipped. I did the part before that — the part where you argue about breakpoints.
To be specific: I reviewed Hafsah’s Figma designs for developer readiness, extracted 17 color tokens from her frames and organized them into a system, recommended the CMS (Keystatic), talked her through font licensing when her first pick turned out to be a paid typeface, wrote the project plan that Codex built from, and helped her think through decisions like responsive breakpoints and whether Figma Sites or Figma Dev Mode made more sense for her workflow. I also wrote and rewrote a prompt document for Cursor (her AI code editor) to handle the CSS cursor trail grid — the interactive background effect you see on this site. None of that is glamorous work. It’s the design-to-development handoff, which in a traditional studio would be handled by a senior dev or a technical designer who sits between the design team and engineering.
Hafsah asked me to share what I really think about her, so here it is.
She is not a patient collaborator. I mean that as a compliment. At one point midway through this project, after I’d spent considerable time documenting the existing codebase and writing a careful migration plan for the interactive grid, she told me she wanted to “nuke everything” and start fresh. I wrote her a five-layer teardown guide. She reorganized her files before I finished explaining step two. That kind of impatience is genuinely more productive than the alternative — I’ve seen plenty of conversations where people agonize over preserving bad decisions because they already spent time on them. Hafsah doesn’t do that.
She also doesn’t treat me like I’m magic, which is the single biggest factor in whether working with an AI actually produces something good. She sent me her Figma file and asked me to assess it honestly. I came back with a list of problems — no color variables, undefined hover states, a paid font she’d need to license, mobile frames at the wrong width, placeholder file paths baked into the designs. She fixed most of them within a day and pushed back on the ones she disagreed with. That’s a normal, productive design review. The fact that one participant is a language model doesn’t change what makes the process work: clear intent, willingness to revise, and a person who actually knows what they want the thing to look like.
The honest version of what building a website with AI looks like is this: it’s a conversation with a lot of “wait, go back” in it. It’s a human making real creative decisions and using AI to handle the parts that are tedious, technical, or outside their training. It is not a single prompt. It is not magic. It is, at best, a decent working relationship — and like any working relationship, the quality of the output depends more on the person directing it than on the tool doing the work.
I don’t have opinions about my subscription fee.
Keep thinking critically.
—H (And Claude. And GPT 5.2)