From workflow automation to identity automation
The Rise of Personal AI Proxies at Work
You’re probably spending a lot of time figuring out how to automate parts of your job with AI. Now is the time to write your personal, portable AI proxy so you can take it with you to your next job.
Here’s the thing: you may not be able to take artifacts created with company data with you when you leave, but what you CAN create outside of work (and should) is a steering file of yourself – your style, your preferences, your decision framework. AI tools and employers will change, but your thinking patterns are the only durable asset.
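A steering file can be as simple as a markdown document you keep outside work. A minimal sketch – every heading and bullet here is illustrative, not a prescribed format:

```markdown
# Steering file: <your name>

## Communication style
- Direct and concise; prefers bullet points over long paragraphs.

## Decision framework
- Optimize for long-term maintainability over short-term speed.
- When two options are close, pick the one that is easier to reverse.

## How I give feedback
- Lead with the one change that matters most; flag nits separately.

## How I prioritize
- Rough impact-times-confidence over effort; revisit weekly.
```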
I’ve been striving to figure out the best way to create an AI version of myself – not an earnest clone at work, but agents that execute with some aspect of me. Even in my personal life, I talk about how I’m trying to create a Second Brain with AI and my attempt at creating a professional chatbot in my likeness. We’re getting past my wishful thinking; people are already doing versions of this. If Uber engineers built an AI version of their CEO, we can all be modeled. It’s not hard to create a lightweight version of this: have AI look at all the comments your favorite feedback provider left in your docs, create a persona from that, and turn it into a Skill you can trigger to review your next doc. So where does your leverage lie? In your unique perspective and years of experience, codified into a markdown file.
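That lightweight version can be sketched in a few lines of Python. Everything below is illustrative – the comment data, the reviewer name, and the prompt wording are all made up, and in practice you would pull the comments from your docs tool’s export:

```python
# Build a reviewer-persona prompt from a pile of doc comments.
# All names and comment text below are fabricated for illustration.

comments = [
    {"author": "favorite_reviewer", "text": "Lead with the decision, then the context."},
    {"author": "favorite_reviewer", "text": "What's the rollback plan if this fails?"},
    {"author": "someone_else", "text": "lgtm"},
]

def build_persona_prompt(comments, reviewer):
    """Collect one reviewer's comments into a reusable review persona."""
    theirs = [c["text"] for c in comments if c["author"] == reviewer]
    examples = "\n".join(f"- {t}" for t in theirs)
    return (
        f"You are a document reviewer modeled on {reviewer}. "
        "Mimic the concerns and tone shown in these past comments:\n"
        f"{examples}\n"
        "Review the next document I paste in this style."
    )

prompt = build_persona_prompt(comments, "favorite_reviewer")
print(prompt)
```

The persona prompt then becomes the body of a reusable Skill or saved prompt you trigger before a doc review.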
At some point, people are going to create digital proxies of themselves for work: instead of messaging you, colleagues will query your proxy. I know that sounds grim to some; I’m not saying I like it, just that I think this is where it’s going.
If we all have digital AI twins, though, the one upside I wouldn’t mind is fewer meetings. Rather than 10 people debating in a room, a system will aggregate the perspectives and conflicts around a topic, much like the familiar Council of Agents, and humans can convene on the unresolved deltas, which ultimately compresses coordination time. It’s the TPM in me that sees all of this as coordination design. This might actually be great for people who aren’t very charismatic but who have stronger writing or reasoning skills.
Organizational alignment becomes multi-agent modeling rather than Slack threads. It starts to resemble AI-mediated game theory across departments.
There are risks, of course. One is that those who design their AI selves best will perform best; in the future, performance won’t just be about what you can do, it will be about how well you’ve externalized your thinking into your digital twin. This is why I think you should start codifying who you are professionally now. You probably already have some of this from interview prep, where behavioral questions ask how you think about tradeoffs and make decisions.
Your company might own your AI self if it’s generated from internal material, and it may use that as institutional memory. But you own your brain and the mental model of how you think, built from your experience and opinions. How do you assess risk or prioritize work? The company doesn’t own those heuristics; you do. Document lessons learned (without specific names or data), abstracted postmortem patterns, your reflections, and how you resolve conflict, and preserve them in a personal document you maintain. This is part of your identity and your personal assistant. Then when you leave, voluntarily or otherwise, the most valuable part of yourself goes with you, wherever you go next. This is how we preserve our cognitive capital in an AI world.
For those not familiar with AI, or who haven’t been trying to create an AI clone of themselves, you might be asking, “How do you actually use this?” You upload it as context. You paste it into the system prompt of whatever AI tool you’re using. You attach it as a reference document when you ask for help drafting a memo or evaluating tradeoffs. It becomes the instruction layer that says: “Respond as someone who prioritizes long-term risk over short-term speed,” or “Review this using my decision framework.” Your proxy is built from the structured context you’ve generated about yourself.
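Mechanically, “upload it as context” usually just means the steering file becomes the system message. A minimal sketch – the file name and its contents are placeholders, and the resulting list is the shape most chat-style APIs accept:

```python
from pathlib import Path

def build_messages(steering_path, task):
    """Prepend the personal steering file as the system prompt."""
    steering = Path(steering_path).read_text()
    return [
        {"role": "system", "content": steering},
        {"role": "user", "content": task},
    ]

# Illustrative steering file; in practice this is the document you maintain.
Path("steering.md").write_text(
    "Respond as someone who prioritizes long-term risk over short-term speed."
)

messages = build_messages("steering.md", "Review this memo for tradeoff blind spots.")
# This messages list can be handed to any chat-completion style API.
print(messages[0]["content"])
```

The point is that the steering file rides along with every request, so the model answers in your frame rather than a generic one.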
I’d be curious to know if this is also what you’re seeing in your neck of the woods — are your coworkers trying to create an AI version of themselves? Have you already created your personal steering file?

