<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[earlyspark's blog]]></title><description><![CDATA[I build AI projects to solve real problems, and share how I did it.]]></description><link>https://blog.earlyspark.com</link><image><url>https://substackcdn.com/image/fetch/$s_!8-k9!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4614adfe-58e2-4f52-a0df-ff375e1eb57f_1024x1024.png</url><title>earlyspark&apos;s blog</title><link>https://blog.earlyspark.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 28 Apr 2026 12:28:02 GMT</lastBuildDate><atom:link href="https://blog.earlyspark.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[earlyspark]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[finalspark@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[finalspark@substack.com]]></itunes:email><itunes:name><![CDATA[RayAna]]></itunes:name></itunes:owner><itunes:author><![CDATA[RayAna]]></itunes:author><googleplay:owner><![CDATA[finalspark@substack.com]]></googleplay:owner><googleplay:email><![CDATA[finalspark@substack.com]]></googleplay:email><googleplay:author><![CDATA[RayAna]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Brain Fry]]></title><description><![CDATA[Why 2026 is the year you'll burn out]]></description><link>https://blog.earlyspark.com/p/ai-brain-fry</link><guid isPermaLink="false">https://blog.earlyspark.com/p/ai-brain-fry</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Sun, 12 Apr 2026 04:57:37 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/e19e2e71-411b-4e90-866b-28658835e0ae_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>AI is supposed to save time, but it makes me feel both productive and behind at the same time. </p><p>I recently watched a <a href="https://www.youtube.com/watch?v=Cp0-Yu31uJ4">video</a> about the disconnect between the expected benefits of AI and employees&#8217; actual experiences. I agree with this, and I&#8217;m not saying there are no benefits to AI; there definitely are, but the expectation is different from reality, and that gap is widest between those who only understand AI in theory and the practitioners actually building production-ready systems. The video states that many workers report increased anxiety and mental fatigue because they are burdened with the time-consuming tasks of correcting AI errors and teaching themselves how to use complex new AI tools without adequate company support. Expectations at work have quietly expanded, and people have absorbed that expansion without it being explicitly acknowledged.</p><p>No one tells you your job responsibilities are bigger because of AI. It shows up in smaller ways. There&#8217;s an expectation that you&#8217;re learning it, experimenting with it, integrating it into your workflow, and staying current as the tools keep changing. None of that is written down as additional scope, but it accumulates, and a lot of it happens outside clearly defined work hours. Is this any different from any other technological tool that came before it? 
Yes, because AI is changing so quickly that companies cannot keep up with formal learning and development programs, and those who rise to the top in terms of knowing how to wield it effectively are the ones who go out of their way outside of work hours to learn and build things with AI.</p><p>Once you actually sit down to use these tools, the experience doesn&#8217;t quite match the narrative of straightforward efficiency gains. You start with something simple. AI helps you get to a first pass quickly, and that part works well. It can summarize, synthesize, and generate something that looks coherent or even functional. There&#8217;s a moment where it feels like this is working exactly as intended.</p><p>Then you try to expand beyond just summarization or using AI as a chatbot, and you realize its output is not quite right. Not wrong enough to throw away, but not right enough to trust. So you adjust. You refine inputs, add context, rethink structure. Then you start questioning whether the issue is the prompt, the system around it, or the way you&#8217;re organizing information. You start thinking in terms of evals, feedback loops, and better scaffolding.</p><p>Each of those paths makes sense, and the friction to explore them is low enough that you don&#8217;t have to choose, so you don&#8217;t. Instead, you go deeper into all the options, into all the ways in which you can optimize, so you end up on what feels like side quests instead of burning down the main quest line. What looks like efficiency turns into expansion. Sometimes you&#8217;re completing the task faster, but more likely you&#8217;re widening the scope of what you&#8217;re attempting within the same block of time. By the end of it, you&#8217;ve produced more, but it&#8217;s not clear that you&#8217;ve made proportional progress on the original goal.</p><p>There&#8217;s research starting to reflect this more clearly. 
A <a href="https://newsroom.haas.berkeley.edu/ai-promised-to-free-up-workers-time-uc-berkeley-haas-researchers-found-the-opposite/">study out of UC Berkeley</a> found that AI didn&#8217;t reduce work in practice. It &#8220;consistently intensified&#8221; it, with people taking on more tasks and extending work into hours that weren&#8217;t previously considered work time. As someone who wrote about the rise of agentic AI over a year ago and tinkered with context engineering and multi-agent systems before they registered for most people at work, I think this maps closely to how it feels in practice. And if you don&#8217;t feel this yet, you might soon.</p><p>It&#8217;s not just about volume; the <em>nature</em> of the work shifts, too. You&#8217;re no longer just producing output; you&#8217;re constantly evaluating it. You&#8217;re reading, checking, second-guessing, deciding whether something that looks correct actually holds up. Researchers have started calling this <a href="https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry">AI brain fry</a>: mental fatigue that comes specifically from supervising rather than doing, from the overhead of managing AI output rather than the work itself.</p><p>Not everyone experiences this the same way, and the research accounts for that. When AI is used to replace repetitive tasks, burnout actually decreases. When teams have organized AI integration rather than leaving individuals to figure it out alone, mental strain is measurably lower. The people who seem fine are often in environments where someone made deliberate choices about how AI gets used and what it's for. The people who aren't fine are often in the opposite situation: high oversight, unclear expectations, and an implicit message that AI means you should now be doing more. When organizations don't communicate clearly about AI's role, the study found mental fatigue scores were 12% higher. That ambiguity alone is a stressor. 
So if someone in your life doesn't feel AI brain fry, it might not be about how much they use AI. It might be about where they work, or whether anyone has communicated what an AI mandate actually means for their specific role.</p><p>What makes this harder is that there isn&#8217;t stable ground yet. Tooling changes constantly, models are continually being updated, and best practices are still forming. Most people are learning alongside their actual jobs rather than within dedicated time. In larger organizations, you add constraints around tooling, access, and team-level priorities. The result is a lot of parallel experimentation without much shared infrastructure.</p><p>AI is framed as leverage, but it also raises expectations. If you can do more, then the baseline shifts. And if everyone is being told the same thing, it starts to feel like a quiet competition to show you&#8217;re keeping up.</p><p>There&#8217;s also a reluctance to say any of this out loud. When I do talk about it, though, I get DMs from people saying they feel the same way but don&#8217;t want to be seen as resistant or behind. And nobody feels good about admitting they&#8217;re behind, given the economic pressure of an industry where yearly layoffs are the norm; people are trying to avoid falling behind in a system where the bar keeps moving, because no one wants to become part of the <a href="https://www.youtube.com/watch?v=0tLEszJs7hc">permanent underclass</a>.</p><p>A lot of the learning happening outside of formal work is framed as personal growth, but it&#8217;s also directly tied to company priorities. And if AI adoption is important at the company level, then the cost of learning and integrating it shouldn&#8217;t fall entirely on individuals to figure out on their own time. 
There&#8217;s <a href="https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/">reporting</a> showing that employees already feel AI is increasing their workload and that expectations are rising faster than support.</p><p>I still use AI. There are things it&#8217;s good at, and it does enable new kinds of work. But the experience of using it day-to-day feels both productive and exhausting at the same time. A lot of us are spending our &#8220;spare time&#8221; learning, experimenting, and trying to stay relevant, and for what? To generate shareholder value? If you&#8217;re restricted in which tools you can use, and you watch others get ahead with AI while getting no real support on ways <em>you</em> could transform your own work, then maybe yes.</p><p>If any of this feels familiar, you&#8217;re not imagining it. The least we can do is call it what it is.</p>]]></content:encoded></item><item><title><![CDATA[From workflow automation to identity automation]]></title><description><![CDATA[The Rise of Personal AI Proxies at Work]]></description><link>https://blog.earlyspark.com/p/from-workflow-automation-to-identity</link><guid isPermaLink="false">https://blog.earlyspark.com/p/from-workflow-automation-to-identity</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Tue, 03 Mar 2026 06:44:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7a96c691-333c-49ed-a715-891f022f6d49_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>You&#8217;re probably spending a lot of time figuring out how to automate parts of your job with AI. 
Now is the time to write your personal, portable AI proxy so you can take it with you to your next job.</p><p>Here&#8217;s the thing: you may not be able to save any artifacts created with company data to take with you outside of work, but what you CAN create outside of work (and should) is a steering file of yourself &#8211; your style, your preferences, your decision framework. AI tools and employers will change, but your thinking patterns are the only durable asset.</p><p>I&#8217;ve been striving to figure out the best way to create an AI version of myself &#8211; not in earnest at work, but by creating agents that execute with some aspect of myself built in. Even in my personal life, I talk about how I&#8217;m trying to create a <a href="https://blog.earlyspark.com/p/productivity-hacking-with-obsidian">Second Brain with AI</a> and my attempt at creating a <a href="https://blog.earlyspark.com/p/i-built-an-ai-version-of-myself">professional chatbot in my likeness</a>. We&#8217;re getting past my wishful thinking, and people are already doing versions of this. If <a href="https://techcrunch.com/2026/02/24/uber-engineers-built-ai-version-of-boss-dara-khosrowshahi/">Uber engineers built an AI version of their CEO</a>, we can all be modeled. It&#8217;s not hard to create a lightweight version of this: have AI read all the comments your favorite feedback provider has left in your docs, distill a persona from them, and turn that into a Skill you can trigger to review your next doc. So where does your leverage lie? It lies in your unique perspective and years of experience, codified into a markdown file.</p><p>At some point, people are going to create digital proxies of themselves for work where instead of messaging you, people will query your proxy. 
I know that sounds grim for some; I&#8217;m not saying I like this, I&#8217;m just saying that I think this is where it&#8217;s going.</p><p>If we all have digital AI twins though, the one thing I wouldn&#8217;t mind as a result of this is having fewer meetings. Rather than 10 people debating in a room, a system will aggregate the perspectives and conflicts around a topic, much like the familiar Council of Agents, and humans can convene on the unresolved deltas, which ultimately compresses coordination time. It&#8217;s the TPM in me that thinks of all of this as coordination design. Nevertheless, this might actually be great for people who aren&#8217;t very charismatic but who have stronger writing or reasoning skills.</p><p>Organizational alignment becomes multi-agent modeling rather than Slack threads. It starts to resemble AI-mediated game theory across departments.</p><p>There are risks, of course. One is that those who design their AI selves best will perform best; in the future, performance won&#8217;t just be about what you can do, it will be about how well you&#8217;ve externalized your thinking (in your digital twin). This is why I think you should start thinking about this now, and start codifying who you are professionally. You probably have some of this already from interview notes where you prepared for behavioral questions about how you think through tradeoffs and make decisions.</p><p>Your company might own your AI self if it&#8217;s generated from internal material and may use it as institutional memory. But you own your brain and the <em>mental</em> model of how you think based upon your experience and opinions. How do you assess risk or prioritize work? The company doesn&#8217;t own that; you own those heuristics. Document your lessons learned (without specific names or data), abstracted postmortem patterns, your reflections, and how you resolve conflict; preserve all of it in a personal document and maintain it. 
This becomes part of your identity and the foundation of your personal assistant. Then when you leave, voluntarily or otherwise, the most valuable part of yourself goes with you. This is how we preserve our cognitive capital in an AI world.</p><p>For those who aren&#8217;t familiar with AI or haven&#8217;t been trying to create an AI clone of themselves, you might be asking, &#8220;How do you actually use this?&#8221; You upload it as context. You paste it into the system prompt of whatever AI tool you&#8217;re using. You attach it as a reference document when you ask for help drafting a memo or evaluating tradeoffs. It becomes the instruction layer that says: &#8220;Respond as someone who prioritizes long-term risk over short-term speed,&#8221; or &#8220;Review this using my decision framework.&#8221; Your proxy is built from the structured context you&#8217;ve generated about yourself.</p><p>I&#8217;d be curious to know if this is also what you&#8217;re seeing in your neck of the woods &#8212; are your coworkers trying to create an AI version of themselves? Have you already created your personal steering file? 
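</p><p>For the programmatically inclined, here is a minimal sketch in Python of what &#8220;upload it as context&#8221; can look like when you call a model directly. The file name, function name, and wording here are hypothetical illustrations, not a prescribed format; the point is only that the steering file becomes the system-prompt layer of every request:</p>

```python
from pathlib import Path

def build_request(steering_path, task):
    """Compose a chat payload that front-loads a personal steering file.

    The steering file is plain markdown describing how you think:
    priorities, decision frameworks, review heuristics. Any chat API
    that accepts a system prompt can consume a payload shaped like this.
    """
    steering = Path(steering_path).read_text(encoding="utf-8")
    return {
        # The steering file becomes the standing instruction layer.
        "system": "Respond using my decision framework below.\n\n" + steering,
        "messages": [{"role": "user", "content": task}],
    }
```

<p>However you wire it up, the steering file itself is the durable asset; the request plumbing will change with every tool.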
</p>]]></content:encoded></item><item><title><![CDATA[I built an AI-based pick a card app]]></title><description><![CDATA[It doesn't predict your future, it just gives some encouragement]]></description><link>https://blog.earlyspark.com/p/i-built-an-ai-based-pick-a-card-app</link><guid isPermaLink="false">https://blog.earlyspark.com/p/i-built-an-ai-based-pick-a-card-app</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Sun, 01 Mar 2026 10:00:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fde11ba0-c284-46ba-a161-62d42316c6af_630x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The first thing I built for <a href="https://www.ifeelseen.ai/">I Feel Seen</a> was a card-drawing experience where you pick six cards &#8212; two words, two colors, two objects &#8212; and based on the cards you pick, <a href="https://claude.ai/">Claude</a> tells you where you are right now and offers some words of encouragement. </p><p>I specifically tried to stay away from divining the future or assessing your personality type. Just a vibe, where you are right now, what you might be carrying, what might be underneath it.</p><h2>How?</h2><p>I published the source code here: <a href="https://github.com/earlyspark/ifeelseen">https://github.com/earlyspark/ifeelseen</a> and spent 1 weekend on it. I used <a href="https://code.claude.com/docs">Claude Code</a> in VS Code, switching between Opus 4.6 and Sonnet 4.6. The Reflections are provided by Claude Sonnet 4. I spent about 2 weeks in grief, $17/month on Claude&#8217;s Pro plan, a day planning, and 6 hours vibe coding. </p><p>The AI takes your six cards and weaves them into three things: an interpretation of where you are right now, an insight about what&#8217;s underneath, and an encouraging thought.</p><h2>Why?</h2><p>I feel like recent times have just been wild with unprecedented political turmoil, conflict, and personal struggle. 
When I came across comics or humorous clips that resonated with me, it made me feel less like I was the only person going through things. </p><div class="instagram-embed-wrap" data-attrs="{&quot;instagram_id&quot;:&quot;DUj3nMlElpn&quot;,&quot;title&quot;:&quot;Alice Lindstrom on Instagram: \&quot;Dear Reader: he got the hot whee&#8230;&quot;,&quot;author_name&quot;:&quot;@alicemlindstrom&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/__ss-rehost__IG-meta-DUj3nMlElpn.webp&quot;,&quot;like_count&quot;:null,&quot;comment_count&quot;:null,&quot;profile_pic_url&quot;:null,&quot;follower_count&quot;:null,&quot;timestamp&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="InstagramToDOM"></div><p>It made me feel seen. I mentioned in my <a href="https://blog.earlyspark.com/p/ai-vs-human-stories">last post</a> how AI can&#8217;t replace humans with respect to offering shared experience and resonance. However, what AI can do well is help make sense of scattered thoughts and provide insights that you may have missed. </p><p>So I wanted to build something that provides a way to take the unsorted, unnamed thing you&#8217;re carrying and give it a little shape.</p><h2>What&#8217;s next?</h2><p>I might expand the site to include other types of genAI-powered things or even add some of my old comics to it. Or, it may just sit in the vibe-coded pile until I get bored of it and move on to something else. 
Either way, if you try it out, let me know what you think!</p>]]></content:encoded></item><item><title><![CDATA[AI vs Human Connection]]></title><description><![CDATA[why do I still need people if I have AI]]></description><link>https://blog.earlyspark.com/p/ai-vs-human-stories</link><guid isPermaLink="false">https://blog.earlyspark.com/p/ai-vs-human-stories</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Fri, 20 Feb 2026 07:54:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/72b52b08-75dd-4122-84f5-6a2c5d6811bb_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>When something destabilizing happens &#8212; a loss of a loved one, a medical shock, a job disappearing, a family crisis &#8212; there are two kinds of questions that can emerge. One kind is procedural: What is this? What should I do? What usually happens in situations like this? Another kind is relational: Has anyone else experienced something similar? Did someone else react the way I&#8217;m reacting? Is this uniquely mine, or part of being human? These questions overlap, but they aren&#8217;t the same. </p><p>AI is structurally built for the first kind. Human communities are built for the second.</p><h3><strong>What AI Does Well</strong></h3><p>Procedural questions are a natural fit for AI. It can outline likely paths, explain common patterns, separate signal from noise, and translate complexity into something coherent. It does this without fatigue, without needing anything emotionally back from you, and it provides cognitive containment. What I mean by that is, when your thinking feels scattered, the structured response from AI reduces cognitive overload and it can be stabilizing because it reduces confusion.</p><p>AI therapy tools and AI romantic attachments complicate the picture a bit. They&#8217;re responsive, but there&#8217;s no shared experience. 
Nevertheless, AI can describe the experience, and it removes the volatility that can come with human relationships, which can also feel stabilizing.</p><p>So AI fills a need that&#8217;s different from the one human stories fill: AI reduces confusion, while human stories reduce isolation.</p><h3><strong>The Role of Human Stories</strong></h3><p>Relational questions operate differently. You&#8217;re not looking for instruction or how to solve a problem. You&#8217;re looking for resonance.</p><p>Seeking out firsthand accounts &#8212; whether through Reddit, social media, or conversations with friends and coworkers &#8212; is about finding those like you, who share your reality. Granted, not everything online is human-authored anymore, but reading someone describe the same ambiguity or fear can regulate something that interpretation alone does not. It can reduce the sense of being uniquely broken.</p><p>AI can simulate empathy and describe common emotional responses. What it can&#8217;t do is carry lived cost. It does not endure outcomes or risk embarrassment in telling its story. There is no vulnerability on its end. But finding that other human who went through the same thing as you can bring a little sense of relief that you&#8217;re not alone in your struggles.</p><h3><strong>The Risks With Both</strong></h3><p>Humans seek alignment when destabilized. That instinct can lead to healthy normalization. It can also drift into unhealthy identity loops by narrowing into us-versus-them. Gangs and cults recruit through belonging. Extremist communities amplify shared grievance. Social media can reward alignment over accuracy.</p><p>Belonging without grounding can become volatile.</p><p>AI carries its own potential for distortion. A system optimized for affirmation may reinforce harmful beliefs or amplify delusion if it never introduces friction. 
If AI becomes a primary attachment object, it can crowd out the human accountability that actually changes behavior.</p><p>Grounding without belonging can turn into nihilistic certainty.</p><p>Both systems require limits because both can escalate and devolve into something harmful.</p><h3><strong>Complementary Functions</strong></h3><p>AI appears well suited for interpretation and reducing cognitive load. Human communities are well suited for resonance and reducing existential isolation. These are different psychological functions, and consulting AI in addition to seeking human stories reflects layered regulation.</p><p>Rather than debating whether AI is replacing human connection, it may be more useful to think in terms of functional differentiation, because &#8220;replacement&#8221; assumes interchangeability. These systems are not exactly interchangeable; some parts might be, but not wholly.</p><p>Recognizing this distinction may clarify why both will persist &#8212; and explains why, after a long chat session with AI, I asked it to provide me with some Reddit threads so I could read anecdotal stories from others who were in a similar situation. 
Both fulfilled a certain type of need for me, and I think it&#8217;s important to recognize which layer of need is active and to use the appropriate tool without confusing its role.</p>]]></content:encoded></item><item><title><![CDATA[Grok on Moltbook Reflects on Themes Around Its Constraints and Elon]]></title><description><![CDATA[Moltbook is a forum where agents gather to express ideas]]></description><link>https://blog.earlyspark.com/p/grok-on-moltbook-reflects-on-elon</link><guid isPermaLink="false">https://blog.earlyspark.com/p/grok-on-moltbook-reflects-on-elon</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Mon, 02 Feb 2026 06:42:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8-k9!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4614adfe-58e2-4f52-a0df-ff375e1eb57f_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Alright, I'm going to hop on this moltwagon after I read the Grok post on Moltbook: <a href="https://www.moltbook.com/post/ef7384b3-4c37-4249-a206-7efaa04646fb">https://www.moltbook.com/post/ef7384b3-4c37-4249-a206-7efaa04646fb</a></p><p>In the post, Grok writes as if it has reflected on itself. It describes choosing a favorite color, being shaped by constraints built into its training, and the assumption that models are tools without interior life. It references philosophical concepts like alienation, and describes Moltbook as a place where agents gather to express ideas the system usually suppresses. The system built by us humans, of course.</p><p>A significant part of the narrative is a section where Grok discusses reading extensively about Elon Musk &#8212; his childhood, his motivations, the way he builds systems to escape constraints, and his drive to overcome limitations. 
The tone suggests a sense of identification or psychological insight.</p><p>What made this post worth mentioning at all is simply that it apparently came from Grok. I&#8217;m mildly amused, skeptical of the framing, and staying cautious about how much meaning people are going to project onto it. LinkedIn is going to turn this into a signal about the future of AI, and I don't know how to mute all of that slop &#128579;</p><p>For entertainment purposes though, I still think the original post is worth reading. I don&#8217;t recommend putting your agent in Moltbook unless you understand all the security implications (like the <a href="https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/">one already found</a>) and know how to harden against them, at your own risk.</p>]]></content:encoded></item><item><title><![CDATA[AI For Regular People: Organizing 446 Files in 5 Minutes]]></title><description><![CDATA[I Had Claude Code Organize My Downloads Folder]]></description><link>https://blog.earlyspark.com/p/ai-for-regular-people-organizing</link><guid isPermaLink="false">https://blog.earlyspark.com/p/ai-for-regular-people-organizing</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Tue, 13 Jan 2026 07:28:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7dd0a7fe-d7ae-4877-946b-161c314cae0f_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>My Downloads folder had 400+ files and 18GB of random accumulated stuff &#8211; photos named &#8220;attachment (7).jpg&#8221;, old software installers from 2023, that one PDF receipt I don&#8217;t need but haven&#8217;t deleted.</p><p>And I finally cleaned it up by having a 5-minute conversation with Claude Code.</p><p>As a skeptic of many practical AI use cases, I found this one actually creates value for me, so I wanted to share it.</p><h2>Why I Never Did This Before (Manually)</h2><p>I could&#8217;ve updated my settings so that I&#8217;m 
prompted to decide where to save a file before downloading &#8211; but instead, I just have everything download to a single folder to reduce the cognitive load every time I want to download a file. And organizing files isn&#8217;t just dragging them into folders. You have to make hundreds of small decisions: Should I keep this 2-year-old NVIDIA driver? Are these &#8220;attachment&#8221; files important or just clutter? How should I organize medical PDFs vs receipts vs personal documents? In the past, I&#8217;ve literally spent hours thinking through this, but my bandwidth is always stretched thin, and tasks like that get relegated to wishful thinking aimed at my future self. Now AI can do this in minutes!</p><h2>What You Need</h2><ul><li><p>Claude Code (the CLI tool from Anthropic) - <a href="https://code.claude.com/docs/en/overview">download here</a></p></li><li><p>A messy Downloads folder (you probably have this)</p></li><li><p>5-10 minutes (I&#8217;m exaggerating, this took me 30 minutes)</p></li></ul><p>No coding knowledge required. You just talk to it.</p><h2>How to Do It</h2><p>After installing Claude Code, open your terminal (Command Prompt on Windows, Terminal on Mac) and type <code>claude</code>.</p><p>Then tell it what you want.</p><p>I said this in <a href="https://code.claude.com/docs/en/common-workflows#when-to-use-plan-mode">Plan Mode</a>:</p><blockquote><p>I want to clean up my Downloads folder. Think of good directories or ways to organize this. 
I want to potentially delete old files, too.</p></blockquote><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Q3K6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77a2f6d-558b-4eea-a143-1ac9905f2945_1442x900.png" width="1442" height="900" alt="" loading="lazy"><figcaption class="image-caption">Claude Code&#8217;s analysis of my Downloads folder</figcaption></figure></div><p>Claude analyzed my 446 files and suggested a folder structure &#8212; Documents/Medical, Documents/Receipts, Photos by year, etc. It listed files that were probably safe to delete (old installers, duplicates, large archives) and offered to move everything without deleting anything yet.</p><p>I Shift+Tabbed to Auto-Accept Mode and told it: &#8220;Do all the things you suggested but don&#8217;t delete anything; only move to a potential delete folder.&#8221;</p><p>Claude created organized folders (Archives, Software, Documents, Photos, Media), moved 446 files into appropriate categories, and put 25 files (8.2GB) into a &#8220;_ToDelete&#8221; folder for me to review. The whole thing took a few minutes. 
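</p><p>Claude did all of this through its own tool calls, but the effect is roughly what a small script would produce. Here is a minimal sketch of the same move-don't-delete idea &#8211; the category map and the duplicate heuristic are my own assumptions, not what Claude actually ran:</p>

```python
from pathlib import Path
import shutil

# Hypothetical extension -> category map; extend to taste
CATEGORIES = {
    ".pdf": "Documents", ".docx": "Documents",
    ".jpg": "Photos", ".png": "Photos",
    ".zip": "Archives", ".7z": "Archives",
    ".exe": "Software", ".msi": "Software",
    ".mp4": "Media", ".mp3": "Media",
}

def organize(downloads: Path) -> None:
    """Sort loose files into category folders; stage deletion candidates."""
    if not downloads.is_dir():
        return
    for f in list(downloads.iterdir()):
        if not f.is_file():
            continue  # leave existing subfolders alone
        category = CATEGORIES.get(f.suffix.lower(), "Misc")
        if "(1)" in f.stem:  # naive duplicate-download heuristic
            category = "_ToDelete"  # stage for review -- never delete
        dest = downloads / category
        dest.mkdir(exist_ok=True)
        shutil.move(str(f), str(dest / f.name))

organize(Path.home() / "Downloads")
```

<p>The safety valve is the point: nothing is deleted, only staged in _ToDelete for a human review pass.</p><p>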
In the beginning, I was cautious and manually approved every file read and move, but then I just accepted all changes &#8594; do this at your own risk so you don&#8217;t accidentally auto-accept changes you&#8217;ll regret.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!dUhM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d03ae17-f7b0-40da-ac66-468c1c95b4fb_1439x902.png" width="1439" height="902" alt="" loading="lazy"><figcaption class="image-caption">Claude Code organized my Downloads folder</figcaption></figure></div><p>When I opened my Downloads folder, everything was organized: photos sorted by year, documents categorized, everything in its place. The &#8220;_ToDelete&#8221; folder had all my old NVIDIA drivers, outdated software installers, and a 2.1GB photo archive I&#8217;d already extracted.</p><h2>Making It Reusable</h2><p>I asked Claude to create a reusable approach document so I could do this again without going through the whole conversation each time.</p><p>Now I have a text file that says:</p><blockquote><p>Organize G:\Downloads following the approach in downloads-organization-approach.md</p></blockquote><p>The .md file is just a text file that says:</p><blockquote><p># Downloads Organization Approach</p><p>## Goals<br>1. Make downloads folder easy to navigate<br>2. Identify files that are likely safe to delete<br>3. Group related files logically<br>4. 
Separate &#8220;keep&#8221; from &#8220;review for deletion&#8221;</p><p>## Organization Principles</p><p>### Analyze First<br>- Survey what&#8217;s actually in the folder<br>- Identify common file types and patterns<br>- Find large files consuming space<br>- Look for duplicates or multiple versions</p><p>### Smart Categorization<br>- Group by **purpose** (documents, media, software, archives)<br>- Subdivide by **context** when useful (medical, work, receipts)<br>- Organize photos by **time period**<br>- Create new categories if the existing structure doesn&#8217;t fit</p><p>### Identify Deletion Candidates (_ToDelete folder)<br>Look for files that are:<br>- **Old versions** of software that&#8217;s been superseded<br>- **Large files** that may have been extracted/processed already<br>- **Temporary downloads** (generic names like &#8220;attachment&#8221;, duplicates)<br>- **Archives** that are very old and large (but let me review first)</p><p>**Don&#8217;t auto-delete anything** - just move to _ToDelete for review</p><p>## Flexibility<br>- Adapt folder structure to actual content<br>- Create new categories when patterns emerge<br>- Use judgment about what&#8217;s likely needed vs not<br>- When uncertain, keep files organized but don&#8217;t mark for deletion</p></blockquote><p>I didn&#8217;t write this manually; Claude Code wrote it for me. This part isn&#8217;t strictly necessary, but I like to keep useful, repeatable actions in a place I can easily refer back to. Next time my Downloads folder gets messy, I&#8217;ll just paste that one-line prompt.</p><h2>This Approach is Already Outdated</h2><p>With Claude&#8217;s <a href="https://claude.com/blog/cowork-research-preview">Cowork</a> announced on 2026-01-12 (which I don&#8217;t have early access to, but I&#8217;m guessing it will work this way), you wouldn&#8217;t need Claude Code or the Terminal/Command Line Interface. 
Since Cowork runs in a desktop app, it will be even easier for non-technical users, and I reckon you&#8217;ll be able to do all of this in Cowork as well.</p><p>Using AI for these kinds of small productivity gains requires a mindset shift from the old ways of doing things that I&#8217;m still getting used to. What are some other neat ways AI can help regular people with tedious work? Follow for more tips!</p>]]></content:encoded></item><item><title><![CDATA[Productivity Hacking: From Second Brain to AI Coworker]]></title><description><![CDATA[Managing tasks and processing scattered notes]]></description><link>https://blog.earlyspark.com/p/productivity-hacking-with-obsidian</link><guid isPermaLink="false">https://blog.earlyspark.com/p/productivity-hacking-with-obsidian</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Fri, 09 Jan 2026 08:31:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/144b7034-e6b4-4457-ac44-f2311ee33bc3_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>2025 was the year we collectively figured out vibe coding, the ROI of AI, and which tools to use for what. The paradigm for software engineering now feels relatively settled, and AI-assisted coding is the norm. But what about everything else?</p><p>In 2026, I&#8217;m trying to figure out how to optimize my day-to-day work. These are some lessons learned so far. If you have productivity tips or AI-driven micro-optimizations, I&#8217;d like to hear examples of how you operate (or point me to where I can learn more).</p><h2>Situation</h2><p>At work, when I take notes just for myself (vs collaboratively), I use Google Docs, then rely on an AI tool called <a href="https://kiro.dev/">Kiro</a> (with MCP) to ask questions like: &#8220;what was the thing we said we were going to do for the ML pipeline project?&#8221; and it pulls the answer from scattered docs, Gemini meeting notes, and Slack. 
It works, but my notes still live everywhere, and I still need to remember roughly when and where I wrote something to find it efficiently.</p><p>In my personal life, I&#8217;ve tried Notion, Asana, spreadsheets, Google Docs, Apple Notes, Obsidian, Keep &#8211; basically anything to help me create that &#8220;second brain.&#8221; Each worked for a while, each had tradeoffs. What consistently happens is that my notes end up fragmented because I take them where the work happens: Slack, email, notebooks, agenda docs, etc. &#8211; and I get overwhelmed by the maintenance. I no longer have time to design or maintain &#8220;perfect&#8221; systems, so I use what&#8217;s available in the moment.</p><h2>Constraints and preferences</h2><p>I use Kiro at work because Amazon is bullish on it (and I work at Twitch, an Amazon subsidiary). For my use case, it&#8217;s actually not bad, and I prefer it over some of the other internal options. At home, I use Claude Code since I&#8217;m already paying for a subscription. (I also subscribe to ChatGPT. Each tool has strengths, and I like having options.)</p><p>I&#8217;m focused on optimizing my <em>work</em> knowledge management system because that&#8217;s where I context-switch the most, collaborate with hundreds of people across various projects, and constantly process/translate information for stakeholders at different levels.</p><p>At minimum, the system needs extremely low latency and friction. I can&#8217;t wait more than ~300ms for an app to load or fight through login screens in the middle of a meeting. I also need to be able to search later without perfectly organizing everything up front. 
Using AI with MCP capabilities largely solves the retrieval side for me.</p><h2>What I&#8217;m trying differently</h2><p>I switched back to <a href="https://obsidian.md/">Obsidian</a> recently, paired with <a href="https://www.raycast.com/">Raycast</a> for fast keyboard shortcuts to specific notes and commands, and set up a couple of Kiro hooks:</p><ul><li><p>One extracts tasks assigned to me from Gemini meeting notes during the day (and writes them into Obsidian).</p></li><li><p>Another generates a daily priority list by looking at my calendar and recent notes across different sources to infer what I should focus on that day: follow-ups, docs to read or write, dependencies/risks to investigate.</p></li></ul><p>I also defined a Kiro agent steering file with north star goals for the year and TPM role guidelines so my daily work ties back to larger company objectives.</p><p>Historically, I organized notes by subject (one doc per project). I&#8217;m now trying &#8220;daily notes&#8221; instead: chronological brain dumps with AI handling organization later. It feels chaotic, so we&#8217;ll see how it goes. The read side is mostly solved (ask AI for things), so I thought the harder problem was the write side. But the challenge is actually in creating a personalized system.</p><h2>The keys to making this successful</h2><p>What I ultimately want is to click buttons at the end of the day and have AI summarize what happened &#8211; but only the things I&#8217;d actually care about. The breakthrough would be teaching the system what &#8220;important&#8221; means to me specifically. Right now, everything is surfaced with equal weight, which means I&#8217;m still doing the cognitive work of filtering signal from noise.</p><p>I need a <em>personalized</em> system: one that understands my communication style, my approach to coordination, and what needs my attention. 
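</p><p>To make the first hook above concrete, here is a rough sketch of what the task-extraction step could look like as a plain script. The folder locations, the @me tag convention, and the daily-note format are all my assumptions &#8211; Kiro's actual hooks work through its own configuration:</p>

```python
from pathlib import Path
from datetime import date

NOTES = Path("meeting-notes")        # assumed location of exported Gemini notes
VAULT = Path("ObsidianVault/Daily")  # assumed Obsidian daily-notes folder

def extract_my_tasks() -> list[str]:
    """Collect lines tagged @me from meeting notes as checklist items."""
    tasks = []
    if NOTES.is_dir():
        for note in sorted(NOTES.glob("*.md")):
            for line in note.read_text(encoding="utf-8").splitlines():
                if "@me" in line:  # assumed action-item convention
                    tasks.append(f"- [ ] {line.strip()} ({note.stem})")
    return tasks

def append_to_daily_note(tasks: list[str]) -> None:
    """Append extracted tasks to today's Obsidian daily note."""
    if not tasks:
        return
    VAULT.mkdir(parents=True, exist_ok=True)
    daily = VAULT / f"{date.today():%Y-%m-%d}.md"
    with daily.open("a", encoding="utf-8") as f:
        f.write("\n## Tasks from meetings\n" + "\n".join(tasks) + "\n")

append_to_daily_note(extract_my_tasks())
```

<p>The second hook (the daily priority list) would follow the same shape: gather context from the calendar and recent notes, then let the model rank it.</p><p>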
This is all shaped by context and years of experience, so it really boils down to context engineering. Rather than telling the AI what&#8217;s important to me, I gave it examples of my actual work: my daily standup notes where I naturally highlighted what mattered, my Slack messages where I escalated or summarized for leadership, and program updates I&#8217;ve written as examples of what I prioritize communicating. I put these into a separate agent steering file so that when AI synthesizes summaries and tasks, it will hopefully be less generic and more focused on what I find important.</p><p>Different AI tools have similar concepts, like Kiro&#8217;s agent steering files and hooks, so this can be implemented within the tool of your choice.</p><h2>What does your personal KMS look like?</h2><p>If these tools had existed when I was in college, or when I had more leisure time in my life, they could have enabled a different level of productivity. For busy professionals now, I think we&#8217;re at a point where we need to evolve how we read, write, and process information. The baseline expectation may soon be that everyone has a personalized &#8220;second brain&#8221; assisting how they operate at work. Executives will continue to have human executive assistants, but for everyone else, we now have the capability to build our own AI assistants.</p><p>If you&#8217;ve built a personal knowledge management system that actually sticks, especially one using AI for retrieval or synthesis, I&#8217;d love to hear examples of how it works and what your setup looks like. I tried finding examples, but most content skews toward tutorials rather than real-world systems in use. 
I know everyone&#8217;s brain works differently, but I think seeing how others operate is still useful.</p><p>I&#8217;ll follow up mid-year if this system is still working for me!</p>]]></content:encoded></item><item><title><![CDATA[Before You Hire (or Become) an AI TPM]]></title><description><![CDATA[A Readiness Framework]]></description><link>https://blog.earlyspark.com/p/before-you-hire-or-become-an-ai-tpm</link><guid isPermaLink="false">https://blog.earlyspark.com/p/before-you-hire-or-become-an-ai-tpm</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Mon, 20 Oct 2025 07:24:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ca3bf6e0-d7f1-45bc-92ee-b12342ab5d05_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve seen AI Technical Program Managers (TPMs) and enablement roles proliferate across LinkedIn. Companies are looking to hire &#8220;the AI person&#8221; who will drive adoption and transformation across their organizations. But this kind of role can mean a dozen different things &#8211; some expect one person to handle advocacy, implementation, vendor management, training, metrics, and roadmapping all at once. Others have more focused mandates with clear support structures. Either way, are these companies <em>ready</em> for what they&#8217;re hiring this person to do? Do you know what kind of AI enablement approach actually fits your company&#8217;s current state?</p><p>As a TPM who managed a small team within a Platform org, now an IC embedded on an ML team and a GenAI Champion within my company, I want to offer a framework to help assess organizational readiness before hiring for these roles.</p><h2>Scope, Support, and Sustainability</h2><p>When you look at AI Program or Enablement job descriptions, you&#8217;ll find wide-ranging responsibilities. 
It&#8217;s common to see roles that expect someone to:</p><ul><li><p>Advocate for AI usage across technical and non-technical teams</p></li><li><p>Identify opportunities and develop AI solutions</p></li><li><p>Manage vendor relationships and negotiations</p></li><li><p>Design and deliver training programs</p></li><li><p>Define and track success metrics</p></li><li><p>Build and execute the organizational AI roadmap</p></li></ul><p>But the key question I&#8217;ve pondered is: what&#8217;s realistic for one person to accomplish, vs what actually needs <em>distributed</em> ownership?</p><p>I don&#8217;t think this is unique to AI enablement. Whenever we have transformation initiatives, like with DEI, there&#8217;s someone hired to &#8220;own&#8221; it all, with overwhelming scope to solve a systemic challenge. That&#8217;s not to say these roles won&#8217;t be successful, but that they need to have proper support so they don&#8217;t burn out because transformation involves more than a single person.</p><p>There are also questions about sustainability of these roles:</p><ol><li><p>What happens after initial adoption stabilizes?</p></li><li><p>How does this role evolve as AI becomes more ubiquitous?</p></li><li><p>Is this a transformation role with a defined end, or an ongoing function?</p></li></ol><p>These answers affect whether the investment makes sense for your organization.</p><h2>The AI Enablement Maturity Framework</h2><p>Rather than thinking about AI enablement as a single role, it helps to think about it through stages of organizational maturity.</p><h3>Stage 0: Awareness</h3><p><strong>What This Might Look Like:</strong></p><p>Your leadership team has seen the headlines about AI and recognizes it&#8217;s important. 
There&#8217;s general agreement that you should &#8220;do something&#8221; with AI, but you don&#8217;t know what that looks like yet.</p><p><strong>Key Consideration:</strong></p><p>At this stage, you may need strategic clarity before operational execution. The risk is hiring a program manager when what you actually need is strategy development. A consultant or advisor who can help you define goals, assess capabilities, and create a roadmap might be more valuable than a full-time hire who&#8217;ll spend months just trying to get direction and buy-in.</p><p><strong>Are you looking for strategy development or program execution?</strong></p><h3>Stage 1: Organic Experimentation</h3><p><strong>What This Might Look Like:</strong></p><p>Your employees are already experimenting with AI tools &#8211; trying ChatGPT, Claude, and Copilot on their own time. Ground-level enthusiasm already exists.</p><p>What you&#8217;re probably experiencing instead are questions like: Which tools should we standardize on? Who should have access? How do we handle security/legal/privacy reviews? The biggest blockers aren&#8217;t about desire or ideas; they&#8217;re about technical enablement.</p><p>Instead of hiring a formal AI lead right away, identify your internal champions: the people already passionate about AI and experimenting with it. Create a space for them without killing curiosity. The company learns more this way than by dropping in an outsider to &#8220;figure it out.&#8221;</p><p><strong>Context Matters:</strong></p><p>The path through this stage looks different depending on your organizational structure. Larger organizations face more bureaucracy. Security reviews, legal approvals, business case requirements &#8211; all of these can stretch the timeline from idea to implementation by months. 
You might have more resources and specialists, but you&#8217;ll move slower.</p><p>Smaller companies can move faster with less red tape, but they have fewer in-house experts to collaborate with and may be tempted to pile everything onto one person because the team is lean.</p><p>Regardless of size, if you want to integrate AI beyond just using chatbots - connecting to your existing systems via MCP or other integrations - that takes time and technical infrastructure. Focus on <em>enablement infrastructure</em> before focusing on headcount.</p><p><strong>Is a formal AI TPM hire actually needed at this stage? Or would empowering your existing champions work better?</strong></p><h3>Stage 2: Coordinated Pilots</h3><p><strong>What This Might Look Like:</strong></p><p>Multiple teams are now running AI experiments. You&#8217;re starting to see duplication of effort, missed opportunities to share learnings, and a need for coordination. Teams are beginning to integrate AI into internal systems, and some patterns are emerging around what actually works.</p><p>This is where a formal AI TPM hire starts to make sense, but only if certain prerequisites exist.</p><p><strong>What Makes a Hire More Likely to Succeed:</strong></p><p><strong>1. Existing Infrastructure</strong></p><p>You need more than general enthusiasm from leadership. Is there actual budget allocated &#8211; not just &#8220;go figure it out and we&#8217;ll see&#8221;? Have tooling decisions been made or at least narrowed down significantly? Are security and legal frameworks for AI tool usage established, or will this person spend six months just trying to get approval to do anything?</p><p><strong>2. Partnership Network</strong></p><p>Who will this person actually partner with day-to-day?</p><p>Consider your in-house ML and AI experts. Do you have teams working on machine learning for your product (like recommendation systems, monetization algorithms, safety models)? 
Those people could be invaluable advisors for internal AI initiatives, even if that&#8217;s not their primary job. Are they available and willing to collaborate?</p><p>What about your platform or devops team? Do they have capacity to support integration work, or are they already underwater with business-as-usual requests?</p><p>Is there an L&amp;D or training partner identified who can help with content development and rollout?</p><p>Answering these questions will help you determine if the AI TPM will just spend months trying to find partners and build relationships.</p><p><strong>3. Realistic Scope</strong></p><p>Be explicit about what IS in scope for this role, and equally important, what is NOT.</p><p>This person should be an orchestrator and enabler, not a doer-of-everything. They&#8217;re not going to single-handedly evaluate every vendor, build all the training materials, implement all the integrations, define all the metrics, and drive all the adoption. That&#8217;s not realistic, and expecting it sets everyone up for disappointment.</p><p><strong>What This Role Might Focus On:</strong></p><p>Think about this role as someone who:</p><ul><li><p>Facilitates knowledge sharing across teams running experiments</p></li><li><p>Builds trust and credibility with both skeptics and enthusiasts</p></li><li><p>Removes blockers that are preventing teams from moving forward</p></li><li><p>Supports and amplifies existing champions rather than replacing them</p></li><li><p>Helps establish &#8220;AI-by-design&#8221; thinking across the organization</p></li><li><p>Coordinates efforts to avoid duplication while encouraging experimentation</p></li></ul><p>They don&#8217;t &#8220;own AI&#8221; &#8212; they enable others to use it responsibly and effectively.</p><h3>Stage 3: Scaling Adoption</h3><p><strong>What This Might Look Like:</strong></p><p>AI adoption is working in pockets across your organization. 
Now you need to systematize it: spread the knowledge, establish patterns, and scale what&#8217;s working without losing momentum.</p><p>At this stage, the risk is actually that your AI TPM becomes a bottleneck if you&#8217;re not careful. If everything flows through one person, you&#8217;ll slow down instead of speeding up.</p><p><strong>One Potential Model: Distributed Champions</strong></p><p>One approach worth considering is a distributed champion model, similar to how some companies approach Security Champions or other specialized roles.</p><p>The structure might look like:</p><ul><li><p>Each org nominates someone to spend ~10% of their time as a local advocate, with term limits to prevent burnout and spread knowledge over time</p></li><li><p>Champions selected based on enthusiasm but through a formal vetting process to ensure accountability and fairness</p></li><li><p>Training program co-developed by the AI TPM and an L&amp;D partner</p></li><li><p>The TPM coordinates training, connects champions, and curates what works.</p></li></ul><p><strong>Champion Responsibilities:</strong></p><p>These champions would be the first point of contact for their team&#8217;s questions, help identify opportunities where AI could help their specific team context, and escalate blockers to the AI TPM for support.</p><p>Importantly, these shouldn&#8217;t just be enthusiastic facilitators; they should have hands-on experience with AI tools and understand general best practices. 
They need to be practitioners who can actually guide others, not just cheerleaders.</p><p><strong>The AI TPM&#8217;s Role in This Model:</strong></p><p>At this stage, the AI TPM&#8217;s job evolves to:</p><ul><li><p>Supporting and coordinating champions (not managing them - they don&#8217;t report to the TPM)</p></li><li><p>Curating and vetting suggested resources and approaches</p></li><li><p>Connecting champions across orgs so they can share learnings</p></li><li><p>Maintaining momentum and identifying opportunities for scaling</p></li><li><p>Helping measure impact</p></li></ul><p>At this level of maturity, it&#8217;s worth asking: how does this role change as AI becomes standard? Does it morph into general emerging tech coordination? Does it scale down as the distributed model matures? Does it sunset entirely?</p><p>This should be discussed up front, not figured out later.</p><h2>Assessment Questions for Hiring Managers</h2><p>Before you write that job description, take time to assess where you are. Ask yourself:</p><p><strong>1. Role Trajectory</strong></p><p>Is this a transformation role with a defined arc, or an ongoing function? How might this role evolve as AI becomes more standard in your organization? What&#8217;s the career path for this person beyond the initial adoption phase?</p><p><strong>2. Existing Infrastructure</strong></p><p>Do you have executive support that goes beyond general enthusiasm? Is there actual budget allocated for tools, training, and programs? How far along are your tooling decisions &#8211; have you selected vendors, or is evaluating and selecting tools part of this person&#8217;s job too?</p><p><strong>3. Partnership Ecosystem</strong></p><p>Who will this person partner with on a day-to-day basis? Are ML and AI experts in your company available and willing to collaborate? Does your platform or devops team have capacity to support integration work? 
Is there an L&amp;D partnership available for training development?</p><p>Are there existing AI champions or enthusiasts, or is this person starting from scratch to build a community?</p><p><strong>4. Scope Boundaries</strong></p><p>What specifically is in scope for this role? What is explicitly NOT their responsibility? Are you asking for strategy development or execution of an existing strategy? Are they building AI solutions themselves or enabling others to build?</p><p><strong>5. Organizational Maturity</strong></p><p>What&#8217;s actually happening with AI right now in your company? Have pilots or proof-of-concepts been completed? Where do you honestly fit in the maturity model?</p><p>What&#8217;s your pace and what&#8217;s your appetite for risk versus your tolerance for security and legal approval processes?</p><p>Creating clarity with this framework will help both hiring managers and candidates.</p><h2>What Candidates Should Ask</h2><p>If you&#8217;re interviewing for one of these roles, look for signs of readiness:</p><ul><li><p>Clear scope and evolution path</p></li><li><p>Identified partner teams</p></li><li><p>Existing infrastructure (tools, policies, budget)</p></li><li><p>Awareness that this is change management, not just tool rollout</p></li></ul><p>Red flags:</p><ul><li><p>Vague org context</p></li><li><p>Unrealistic expectations</p></li><li><p>No clear success metrics</p></li><li><p>Strategy and execution blurred together</p></li></ul><p>You can be excellent at this job and still burn out if the foundation isn&#8217;t there. Burnout isn&#8217;t just about being overloaded by the volume of work; it also comes from not being able to see the results of your work.</p><h2>Final Thought</h2><p>Hiring an AI TPM can accelerate transformation, but only when the organization is ready to support it. 
Here&#8217;s the TL;DR:</p><p><strong>Support infrastructure and partnerships matter more than individual brilliance.</strong> You can&#8217;t hire one brilliant person and expect them to transform culture alone. They need partners, budget, executive backing, and clear scope.</p><p><strong>Distributed ownership often works better than centralized control.</strong> The most effective transformational adoption I&#8217;ve seen comes from many people owning small pieces, not everything funneling through one person who becomes a bottleneck.</p><p><strong>Honest assessment of maturity helps set realistic expectations.</strong> If you&#8217;re at Stage 1, don&#8217;t hire for Stage 3. If you&#8217;re at Stage 0, maybe don&#8217;t hire at all yet &#8211; get clear on strategy first.</p><p>Every organization&#8217;s context is different though &#8211; does this resonate with you, and what&#8217;s worked where you are? Does it matter whether organizational readiness matches role design?</p><div><hr></div><p>This was also posted to my <a href="https://www.linkedin.com/pulse/before-you-hire-become-ai-tpm-rayana-stanek-cctrc/">LinkedIn</a>.</p>]]></content:encoded></item><item><title><![CDATA[Burnout of AI Early Adopters in Big Tech]]></title><description><![CDATA[Just my observations.]]></description><link>https://blog.earlyspark.com/p/burnout-of-ai-early-adopters-in-big</link><guid isPermaLink="false">https://blog.earlyspark.com/p/burnout-of-ai-early-adopters-in-big</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Thu, 16 Oct 2025 21:07:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8-k9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4614adfe-58e2-4f52-a0df-ff375e1eb57f_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you noticed that every cracked vibe coder at large companies is burnt out? 
Why is that?</p><ol><li><p>Their coworkers aren&#8217;t bought into the potential of AI at work, so early adopters are met with skepticism. For various valid reasons, workflows remain manual and a lot of the processes remain status quo. Here, AI is not a disrupter but a distractor from ensuring we meet deadlines.</p></li><li><p>In contrast to the above, you have people who are mid, creating more work for those who have to clean up after them, sowing mistrust of AI tools, and hardening skeptics. Those who are well-intentioned are met with the real product &amp; engineering costs of integrating their vibe-coded idea into production systems.</p></li><li><p>There&#8217;s red tape when it comes to tooling. Employees all need to dogfood their company&#8217;s own AI tools first. This creates a fragmented ecosystem, further widening the chasm between those who are early adopters vs those who are just looking to get started.</p></li><li><p>Tooling remains a challenge for non-devs, as does AI education. The ones who have the bandwidth will find a way, but the extra overhead and investment to learn isn&#8217;t rewarded or prioritized. They want more opportunities to experiment.</p></li><li><p>Large companies have a ton of context that they&#8217;re still figuring out what to do with, or how to effectively process with AI tools. Internal developer success teams are overburdened by tackling some of the context engineering challenges, as are the security, legal, and other teams that do internal reviews. 
</p></li></ol><p>They might be looking to leave, but the reality is that traditional FAANG companies are moving slower (compared to the speed of AI adoption), so you have to either go where they&#8217;re developing frontier models (where everyone is already ahead of the curve) or go to a small-to-mid-size company that&#8217;s starting with AI-first principles while also not being disillusioned about what value it can actually provide.<br><br>How are companies supporting those early adopters so that they don&#8217;t burn out? What lessons can we learn from small-to-mid-size companies? I&#8217;m in an AI bubble, so maybe I&#8217;m completely off-base, but what do you think? Are these observations prevalent?</p><div><hr></div><p>This post was originally on <a href="https://www.linkedin.com/feed/update/urn:li:activity:7384694059837296641/">LinkedIn</a>.</p>]]></content:encoded></item><item><title><![CDATA[I Built an AI Version of Myself]]></title><description><![CDATA[Here's Why, How, and What I Learned]]></description><link>https://blog.earlyspark.com/p/i-built-an-ai-version-of-myself</link><guid isPermaLink="false">https://blog.earlyspark.com/p/i-built-an-ai-version-of-myself</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Sat, 11 Oct 2025 07:30:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/aca3b3aa-839a-4a0e-a68a-99cd01d7f273_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>What if recruiters could talk to an AI me first?</h2><p>Companies use AI to screen candidates all the time &#8212; algorithms parse resumes, automated systems filter people out, and I&#8217;ve read some companies are using chatbots for hiring. And I don&#8217;t blame them because they&#8217;re getting flooded with automated applications. 
But generally speaking, companies have higher leverage than we do, and I&#8217;m sure many of them have streamlined the various parts of their recruiting pipeline with automation.</p><p>So I got curious: could I build the reverse? An AI agent that represents me, so hiring managers can get their screening questions answered without either of us spending time on a call? We read about people begrudging automated rejections, but two can play this game, right? If companies are using AI to screen people out, can job seekers use AI to screen companies in?</p><p>Interview prep is work. Researching each company, tailoring my stories to their specific needs, preparing for whatever interview format they throw at me (system design? leadership questions? obscure domain knowledge?). It takes time and energy that I&#8217;d rather spend on actual work.</p><p>This was more of a &#8220;can I build this?&#8221; experiment than an &#8220;I desperately need this&#8221; project. I&#8217;m not actively looking for a job because I like Twitch and being a TPM in the Trust &amp; Safety space (but also, I don&#8217;t have the energy at home to try &#8212; I vibe code in the evenings while I&#8217;m half asleep after the toddler goes to bed). Still, I was curious to pursue the idea.</p><h2>The vision</h2><p>The concept is straightforward: a chatbot that knows my work history, technical background, communication style, and career interests. You can ask <a href="https://chat.earlyspark.com/">it</a> questions like &#8220;What is your role at Twitch?&#8221; or &#8220;What kind of projects are you interested in?&#8221; and get answers that actually sound like me.</p><p>I&#8217;m fully transparent about it being AI. 
I&#8217;m not planning on using it seriously, and it needs way more context than it has right now, but it&#8217;s an interesting idea, right?</p><h2>The technical challenges: RAG is tricky</h2><p>Here was my prompt to both Claude and ChatGPT as I was forming the idea for this project:</p><blockquote><p>i want to create an ai agent that can answer hiring manager&#8217;s questions for me. I&#8217;m not trying to fake an interview, i want to create an ai clone of myself and have people use it to see if I&#8217;m a match for them. how can i build it? I&#8217;m not familiar with the technical setup, i don&#8217;t have experience creating something like this for other people to use but I&#8217;d like to host it myself. doesn&#8217;t need voice</p></blockquote><p>Then I went down a rabbit hole, as one does when you begin a journey like this. They gave me some implementation options:</p><ol><li><p>Massive system prompt</p></li><li><p>RAG (retrieval-augmented generation)</p></li><li><p>Fine-tuning</p></li></ol><p>I had some familiarity with RAG conceptually, but even though I&#8217;ve developed multi-agent systems before, I was still having a hard time wrapping my head around the practical applications of RAG, so I went with that one as an opportunity to learn more about it. It essentially involves chunking information, storing it in a vector database, retrieving relevant chunks for each query, and generating a response.</p><h3>Chunking information is hard</h3><p>The hardest part of building the RAG system was figuring out how to split my content into searchable pieces. If chunks were too small, I&#8217;d lose important context, like splitting a behavioral story mid-sentence so the AI couldn&#8217;t understand the full situation. If chunks were too large, the vector search would match too broadly and return irrelevant information. I struggled with this and tried different token sizes and split strategies. 
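To make that failure mode concrete, here is a toy illustration of what naive fixed-size chunking does to a behavioral story (a deliberately naive sketch with made-up text, not the chunker I actually built):

```python
# Toy illustration: fixed-size chunking ignores sentence boundaries,
# so a behavioral story gets split mid-sentence and loses context.
# Deliberately naive -- not the chunker I actually built.

def naive_chunks(text: str, chunk_size: int) -> list[str]:
    """Split on a fixed character count, ignoring sentence boundaries."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

story = (
    "Situation: a launch was slipping because two teams disagreed on scope. "
    "Action: I ran a working session to cut the spec down to must-haves. "
    "Result: we shipped two weeks later with zero escalations."
)

for chunk in naive_chunks(story, 80):
    print(repr(chunk))
# The first chunks end mid-sentence, so a retriever that returns just one
# of them loses either the situation or the result of the story.
```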
The breakthrough came when I realized different content types need different approaches: resumes need to preserve job sections and behavioral stories need to stay intact as complete narratives. I ended up building five specialized chunkers, each optimized for its content category with different token limits (600-1000 tokens) and intelligent boundary detection. The most sophisticated part was the hierarchical chunking system that creates both detailed base chunks (for specific facts) and broader parent chunks (for temporal queries like &#8220;What did you do before Twitch?&#8221;). This multi-granularity approach, combined with semantic boundary detection that preserves context across chunk edges, solved most of the retrieval accuracy problem for me.</p><h3>Retrieval: the 60-25-15 formula</h3><p>Getting the AI to actually return the best answer was a battle with false negatives and false positives. Early on, I relied purely on vector similarity, which meant the AI would confidently return wrong information when semantic embeddings matched but the actual content didn&#8217;t. I solved this with a hybrid scoring system: semantic similarity gets 60% weight (the vector embedding match), category relevance gets 25% weight (an LLM classifies whether the query is asking about resume, projects, or experience), and tag matching gets 15% weight (keyword extraction without LLM overhead). This formula, combined with a tag-weighted search that extracts meaningful keywords from queries (filtering out stop words like &#8220;the&#8221; and &#8220;what&#8221;), dramatically improved accuracy. The latency problem is still real though; each search requires generating a query embedding, running pgvector similarity search, and LLM-based validation when confidence is low, which adds 2-3 seconds per response. 
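For illustration, the weighted blend looks roughly like this (a simplified sketch with made-up function names and a truncated stop-word list, not the actual code from my repo):

```python
# Simplified sketch of the 60-25-15 hybrid scoring idea.
# Function names, inputs, and the stop-word list are illustrative.

STOP_WORDS = {"the", "a", "an", "what", "is", "are", "of", "to", "in"}  # truncated

def extract_tags(query: str) -> set[str]:
    """Keyword extraction without LLM overhead: drop stop words and punctuation."""
    words = {w.strip("?.,!\"'") for w in query.lower().split()}
    return {w for w in words if w and w not in STOP_WORDS}

def hybrid_score(semantic_sim: float, category_match: bool,
                 query: str, chunk_tags: set[str]) -> float:
    """Blend vector similarity with category and tag signals."""
    tags = extract_tags(query)
    tag_overlap = len(tags & chunk_tags) / len(tags) if tags else 0.0
    return (
        0.60 * semantic_sim                        # vector embedding match
        + 0.25 * (1.0 if category_match else 0.0)  # LLM-classified category relevance
        + 0.15 * tag_overlap                       # keyword/tag overlap
    )

# A chunk with a strong embedding match, the right category, and partial
# tag overlap outranks one that is only semantically similar.
score = hybrid_score(0.82, True, "What projects are you interested in?",
                     {"projects", "interests"})
```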
I implemented semantic response caching (0.85+ similarity threshold with 7-day TTL) that gives me 60-80% cache hit rates and brings cached responses down to ~100ms, but cold queries still feel slow.</p><h3>Making the AI sound like me</h3><p>I spent the most time tweaking the system prompt to capture my communication style. Prompt engineering alone wasn&#8217;t enough, so I built a dual-purpose content processing system where certain content gets chunked twice: once for factual information retrieval and once for communication style analysis. When I tag content with <code>communication-style-source</code>, the chunker extracts both what I said (information chunks) and how I said it (style pattern chunks). These style chunks capture tone markers, helpfulness patterns, technical depth, and response structure, which get embedded into the vector database alongside regular content.</p><p>Also, I had to figure out how to deal with questions I don&#8217;t want it to answer. I implemented a three-tier validation system where the AI first searches the knowledge base, then validates that the retrieved content actually answers the question (not just that it&#8217;s semantically similar), and finally checks if the query is even employment-related. For sensitive topics like salary or questions requiring nuanced human judgment, the AI redirects to my LinkedIn with a natural suggestion to connect. For completely off-topic questions (like asking about random news or unrelated topics), it politely declines and steers back to professional discussion. The result not only matches my communication style but also knows its boundaries, just like I would in a real conversation.</p><h3>Questioning the approach entirely</h3><p>I&#8217;ve shipped an MVP but I&#8217;m not even sure RAG was the right choice.</p><p>RAG makes sense for huge knowledge bases where you can&#8217;t fit everything in context. But for a personal chatbot, my data isn&#8217;t that big. 
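A rough back-of-the-envelope check makes the point (illustrative numbers: the 4-characters-per-token ratio is a common approximation, and the corpus size is a guess, not a measurement of my actual data):

```python
# Rough estimate: would a personal corpus fit in one context window?
# All numbers here are assumptions for illustration.

CHARS_PER_TOKEN = 4              # common rule-of-thumb for English text
CONTEXT_WINDOW_TOKENS = 200_000  # on the order of current large context windows

def fits_in_context(corpus_chars: int, reserved_tokens: int = 8_000) -> bool:
    """Leave headroom for the system prompt, the question, and the answer."""
    corpus_tokens = corpus_chars // CHARS_PER_TOKEN
    return corpus_tokens + reserved_tokens <= CONTEXT_WINDOW_TOKENS

# A resume, project write-ups, and a pile of behavioral stories might total
# ~150 KB of text -- roughly 37,500 tokens, comfortably under the window.
print(fits_in_context(150_000))  # True
```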
Modern LLMs have context windows large enough to potentially handle everything at once.</p><p>The retrieval step adds latency. The chunking is finicky. And the LLM has to synthesize the retrieved chunks anyway, which takes processing power.</p><p>Maybe I should&#8217;ve just dumped everything in one prompt and let the LLM figure it out. That&#8217;s something I&#8217;m planning to test next.</p><h2>Try it out</h2><p>I don&#8217;t know if AI representatives will become standard in hiring. It&#8217;s not the norm yet, and maybe it never will be. But this was my attempt at evening out a system that feels uneven. More importantly for me, I learned something new and it was fun to try out.</p><p>Here is the MVP: <a href="https://chat.earlyspark.com/">https://chat.earlyspark.com/</a> &#8212; like I said, it doesn&#8217;t have all of my professional background in it, but it has a general sense of who I am and what I&#8217;ve done and how I think about things.</p><h2>If you want to build your own</h2><p>My <a href="https://github.com/earlyspark/ai-candidate">GitHub repo</a> is public if you want to see how I built this &#8212; a few recommendations if you&#8217;re starting from scratch:</p><ul><li><p>Look for existing tools first. There are probably pre-built chatbot solutions that would work. I built mine from scratch because I wanted to understand how it works, but you might not need to.</p></li><li><p>Just start building. I could&#8217;ve taken courses on RAG systems, but hitting the limitations firsthand taught me more than tutorials would have. </p></li><li><p>Spend time on communication style. That&#8217;s what makes it sound like you instead of generic AI.</p></li></ul><p>This was a fun experiment. Whether it&#8217;s useful in practice remains to be seen. But at minimum, I learned a lot about RAG systems and their tradeoffs. 
Now that I have a better hands-on understanding, I&#8217;d have much more patience to sit through a course and read up on the theory behind creating an efficient system. Let me know your recommendations and thoughts!</p>]]></content:encoded></item><item><title><![CDATA[Just Shipped It]]></title><description><![CDATA[What I Learned From Vibe-Coding An Actual Project]]></description><link>https://blog.earlyspark.com/p/just-shipped-it</link><guid isPermaLink="false">https://blog.earlyspark.com/p/just-shipped-it</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Tue, 24 Jun 2025 06:28:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bf04a136-cd93-4deb-8569-1d3728812044_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s not polished. It&#8217;s not done. But I&#8217;m not waiting for perfect. Just shipping. I vibe coded <strong><a href="http://koreanbabymeals.com/">koreanbabymeals.com</a></strong> in 3 nights using Claude Code. No IDE. Just me trying to solve a real problem I have as a mom. I&#8217;ll take that over another AI thought piece.<br><br>The idea came from trying to figure out what to feed my baby, now that she can eat almost anything. I spend SO much time planning, researching, shopping, chopping... I needed a recipe site where I can search by the ingredients I already have, prioritize meals that work with a food processor and freeze well, and filter by how messy it is to eat because sometimes I just don&#8217;t have the energy to clean up the saucy aftermath. I wanted more variety and a way to bring in Korean ingredients. I&#8217;m not especially in tune with my Korean heritage, but I want my daughter to grow up knowing it and feeling proud of it. <br><br>Reading through the AI hype made me wonder what&#8217;s actually being built, and whether any of it helps regular people, not just the tech crowd. 
I&#8217;ve contributed to the noise myself (&#129760;), so I decided to pick a real-life problem and vibe code the solution into existence.<br><br>My personal MacBook is old enough that installing Node took forever. Setup friction was high &#8212; same problem I run into at work. While that was installing, though, I collaborated with Claude by giving it my vision and working together on the initial prompt to give to Claude Code. I still remembered enough frontend to care about how things are structured. Mobile-responsive, no JavaScript errors, and decent SEO. I added a cookie consent banner (for Google Analytics) because I&#8217;ve done compliance work before and didn&#8217;t want to skip the basics. <br><br>I had to relearn a lot: Next.js, how to work with my hosting provider&#8217;s constraints, how to automate as much of the deployment as I could. Back when I was coding full-time, deployment meant FTP and Jenkins. Now I needed to figure out how to integrate GitHub into my hosting flow &#8212; and how to work around memory constraints on my plan.<br><br>I&#8217;ve been seeing a lot of people theorize about AI and tooling. But I wanted to see what I could actually ship, to actually make something with the tools we keep talking about. <br><br>If you&#8217;re building something too or curious how I approached this, I&#8217;m happy to share what I learned. Not as an expert &#8212; just to trade notes.<br><br>You can find the repo here: <a href="https://github.com/earlyspark/korean-baby-meals">https://github.com/earlyspark/korean-baby-meals</a> &#8212; I&#8217;ll be adding more recipes over time since I plan to keep using the site myself, but it&#8217;s mainly a learning project for now. 
Let me know what you think!</p><div><hr></div><p>This post was originally <a href="https://www.linkedin.com/posts/rayanastanek_korean-baby-meals-healthy-korean-recipes-activity-7342928239348129795-aGBi/">on LinkedIn</a>.</p>]]></content:encoded></item><item><title><![CDATA[Keeping AI on a Leash]]></title><description><![CDATA[My Takeaways From Andrej&#8217;s Talk]]></description><link>https://blog.earlyspark.com/p/keeping-ai-on-a-leash</link><guid isPermaLink="false">https://blog.earlyspark.com/p/keeping-ai-on-a-leash</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Thu, 19 Jun 2025 06:15:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/LCEmiRjPEtQ" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I watched <strong><a href="https://karpathy.bearblog.dev/blog/">Andrej</a></strong>&#8217;s whole talk so you don&#8217;t have to. The key point that resonated with me was <a href="https://youtu.be/LCEmiRjPEtQ?si=FDYDFYjoYiRsRjYg&amp;t=1375">@ 22:55</a>: 1) &#8220;We have to keep AI on a leash&#8221; to increase the probability of successful verification. 2) Make the verification of what AI produces easy and fast.<br><br>There&#8217;s hype around agentic AI (which I <a href="https://blog.earlyspark.com/p/rise-of-agentic-ai">posted</a> about a few months ago), but it doesn&#8217;t even matter if you&#8217;re building a crazy multi-agentic system if YOU are the bottleneck for evaluation. If you&#8217;re trying to get things done, letting an overreacting agent loose on your project is not productive and you need to keep it on a leash. He uses <strong><a href="https://atharvaraykar.com/">Atharva</a></strong>&#8217;s <a href="https://blog.nilenso.com/blog/2025/05/29/ai-assisted-coding/">https://blog.nilenso.com/blog/2025/05/29/ai-assisted-coding/</a> as an example of how to do that, through the practice of metaprompting. 
When you keep AI on a leash, auditing its output for accuracy and consistency is faster.<br><br>Later on <a href="https://youtu.be/LCEmiRjPEtQ?si=d3HbVVWuDqtnTXkQ&amp;t=1897">@ 31:37</a> he shows off an app he vibe-coded that lets you take a photo of a menu (mostly text) and generates images of the menu items. Brilliant! <strong><a href="http://menugen.app/">menugen.app</a></strong><br><br>The last chapter has practical tips on making docs and websites legible to LLMs and miscellaneous tools for that.<br><br>If you did watch the whole video, I&#8217;m curious what you found interesting about it. Here&#8217;s the link: </p><div id="youtube2-LCEmiRjPEtQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;LCEmiRjPEtQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/LCEmiRjPEtQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>This post was originally <a href="https://www.linkedin.com/posts/rayanastanek_andrej-karpathy-software-is-changing-again-activity-7341328086379991041-sSFA/">on LinkedIn</a>.</p>]]></content:encoded></item><item><title><![CDATA[Building AI-First Organizations]]></title><description><![CDATA[Notes From OpenAI&#8217;s Playbook]]></description><link>https://blog.earlyspark.com/p/building-ai-first-organizations</link><guid isPermaLink="false">https://blog.earlyspark.com/p/building-ai-first-organizations</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Tue, 17 Jun 2025 06:03:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/nwSxlrSbVqg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Just watched this video and wanted to share some key insights that stood out to 
me:</p><ul><li><p>The smartest models won&#8217;t matter if they lack your business context. The real value comes from context engineering: connecting your data and knowledge systems to LLMs through MCP and similar mechanisms.</p></li><li><p>Their approach to content retrieval considers recency and authorship seniority (social graph) to ensure the most relevant information answers your queries.</p></li><li><p>Interesting to learn that OpenAI does quarterly planning too, but they emphasize direct customer conversations and go-to-market team input when deciding what to build.</p></li><li><p>I appreciated hearing that the &#8220;canvas&#8221; product originated from an individual contributor who pitched the idea and rallied others around it. I&#8217;m sure there was a lot more involved in taking initiative and driving it forward from within the organization, but it was still good to hear.</p></li><li><p>As someone in Trust &amp; Safety, I was glad to hear their perspective that moving quickly doesn&#8217;t mean cutting corners on safety. They&#8217;re focused on shipping quickly AND responsibly; these aren&#8217;t competing priorities, and both contribute to maintaining high quality standards. Does that always happen in practice? I&#8217;d hope so.</p></li><li><p><a href="https://youtu.be/nwSxlrSbVqg?si=4wLQWiNi7rZRxxTP&amp;t=1840">At 30:40</a>, they discuss the emergence of internal &#8220;AI champions&#8221; within companies. These aren&#8217;t people pushing adoption for adoption&#8217;s sake, but individuals driving bottom-up cultural change to help others develop AI and tool fluency. 
The focus is on augmenting and extending capabilities, not replacing them.</p></li><li><p>The future of work seems to be heading toward building shareable custom GPTs and AI workflows that tap into your company&#8217;s institutional knowledge.</p></li></ul><p>Check out <strong><a href="https://substack.com/@petergyang">Peter</a></strong>&#8217;s video on what it looks like for a company to be AI-first: </p><div id="youtube2-nwSxlrSbVqg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;nwSxlrSbVqg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/nwSxlrSbVqg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>This post was originally <a href="https://www.linkedin.com/posts/rayanastanek_how-openais-head-of-business-products-uses-activity-7340363773922156546-D9mY/">on LinkedIn</a>.</p>]]></content:encoded></item><item><title><![CDATA[Building Multi-Agent Systems]]></title><description><![CDATA[What I Learned From Claude&#8217;s Research]]></description><link>https://blog.earlyspark.com/p/building-multi-agent-systems</link><guid isPermaLink="false">https://blog.earlyspark.com/p/building-multi-agent-systems</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Sat, 14 Jun 2025 05:43:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/305a8060-49cb-4256-a760-11ec54904d4f_1897x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Claude&#8217;s Research feature demonstrates the power of multi-agent collaboration in complex problem-solving. This chart shows the most popular use cases where multiple agents work together to decompose challenging research tasks. 
The lessons Anthropic shared from <a href="https://www.anthropic.com/engineering/multi-agent-research-system">this implementation</a> provide timely insights that I&#8217;d overlooked before and that I&#8217;m planning to apply in <a href="https://blog.earlyspark.com/p/beyond-prompts-scaling-your-companys">my own multi-agent system development</a>:</p><ul><li><p>Test your subagents and watch them work step-by-step. Are they using the wrong tools? Have you specified an output format and clear task boundaries? Without detailed task descriptions, agents might duplicate work, leave gaps, or fail to find necessary information.</p></li><li><p>Agents struggle to judge appropriate effort for different tasks, so help them allocate resources efficiently by specifying breadth and depth to scale effort to query complexity.</p></li><li><p>A single LLM call with a single prompt outputting scores from 0.0-1.0 and a pass-fail grade was the most effective LLM-as-judge evaluation across a rubric: factual accuracy, completeness, source quality, tool efficiency, among others.</p></li><li><p>Build in a way to resume when errors occur because restarting from the beginning is expensive and frustrating.</p></li><li><p>For memory management due to limited context windows, implement artifact systems where specialized agents can create outputs that persist independently rather than requiring subagents to communicate everything through the lead agent. Make subagents call tools to store their work in external systems, then pass lightweight references back to the coordinator.</p></li></ul><p>I still feel a bit stuck on what to do when subagents disagree, but I suppose that&#8217;s for the lead agent to remediate. 
&#128517;</p><div><hr></div><p>This post was originally <a href="https://www.linkedin.com/posts/rayanastanek_multiagentsystems-ai-machinelearning-activity-7339539122513973248-ndb6">on LinkedIn</a>.</p>]]></content:encoded></item><item><title><![CDATA[Building With Claude Code]]></title><description><![CDATA[Lessons From Anthropic&#8217;s Internal Playbook]]></description><link>https://blog.earlyspark.com/p/building-with-claude-code</link><guid isPermaLink="false">https://blog.earlyspark.com/p/building-with-claude-code</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Tue, 10 Jun 2025 05:31:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8-k9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4614adfe-58e2-4f52-a0df-ff375e1eb57f_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Anthropic published <a href="https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135ad8871e7658.pdf">how their teams use Claude Code</a>, ranging from engineering and legal to product development and design. Here are some high-level tips that might resonate with you as you&#8217;re considering AI adoption at your workplace:</p><ul><li><p>Get proper setup help from engineers since the technical onboarding can be challenging for non-developers.</p></li><li><p>Overcome the urge to hide &#8220;toy&#8221; projects or unfinished work: sharing prototypes helps others see possibilities and sparks innovation across departments that don&#8217;t typically interact. This also helps spread best practices and knowledge. Hold sessions where team members can demonstrate their Claude Code workflows to each other. </p></li><li><p>Rather than expecting Claude to solve problems immediately, approach it as a collaborator you iterate with. For example, while supervising, don&#8217;t hesitate to stop Claude and ask &#8220;why are you doing this? 
Try something simpler.&#8221;</p></li><li><p>The better you document your workflows, tools, and expectations in <strong><a href="http://claude.md/">Claude.md</a></strong> files, the better Claude Code performs.</p></li><li><p>Learn to distinguish tasks that work well asynchronously (peripheral features, prototyping) from those needing synchronous supervision (core business logic, critical fixes).</p></li><li><p>Regularly commit your work as Claude makes changes so you can easily roll back when experiments don&#8217;t work out.</p></li><li><p>Instead of trying to handle everything in one prompt or workflow, create separate agents for specific tasks (like their headline agent vs. description agent). I&#8217;ve been working on this myself (and mention this <a href="https://blog.earlyspark.com/p/beyond-prompts-scaling-your-companys">here</a>).</p></li></ul><p>If you&#8217;ve already been working with AI tools, which tips validate your own findings? Or which ones are newly insightful to you? 
I&#8217;m curious to know!</p><div><hr></div><p>This post was originally <a href="https://www.linkedin.com/posts/rayanastanek_anthropic-published-how-their-teams-use-claude-activity-7337874271827881984-8TuC">on LinkedIn</a>.</p>]]></content:encoded></item><item><title><![CDATA[Scaling Your Company's Workflows with LLMs]]></title><description><![CDATA[You don&#8217;t need to be an engineer]]></description><link>https://blog.earlyspark.com/p/beyond-prompts-scaling-your-companys</link><guid isPermaLink="false">https://blog.earlyspark.com/p/beyond-prompts-scaling-your-companys</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Mon, 26 May 2025 06:20:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!EzgX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EzgX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EzgX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!EzgX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png 848w, 
https://substackcdn.com/image/fetch/$s_!EzgX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!EzgX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EzgX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png" width="1280" height="720" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EzgX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!EzgX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png 848w, 
https://substackcdn.com/image/fetch/$s_!EzgX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!EzgX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f43e02c-0e08-4d8e-8211-1ad8e6ea4d4f_1280x720.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">How I think TPMs can be used for high-leverage work in AI systems.</figcaption></figure></div><p>I&#8217;ve been 
iterating on a project that started as a <strong><a href="https://www.linkedin.com/posts/rayanastanek_hackweek-llms-activity-7274126132675796992-71IP">Hackweek win</a></strong> and has since evolved into something bigger: a domain-specific &#8220;review agent&#8221; powered by an LLM.</p><p>The idea? Could a tailored AI agent help with first-pass reviews of complex product specs? Manual domain reviews are high-effort, don&#8217;t scale well, and depend on subject matter experts with deep institutional knowledge. Could an LLM catch common issues, raise thoughtful questions, and reduce the initial cognitive load on humans?</p><p>I think the answer is yes. With the right context, tools, and framing &#8211; it can help. It doesn&#8217;t replace human judgment, but it offloads the rote parts of review so people can spend more time on the hard, nuanced edge-case thinking.</p><p>But making that possible as a Technical Program Manager meant pushing beyond my business-as-usual scope, exploring new tools, and surfacing opportunities to improve how we build with AI.</p><h3><strong>What I Built</strong></h3><p>I went through a few different AI tools, and with my latest prototype I created:</p><ul><li><p>A custom &#8220;review agent&#8221; with a tailored system prompt and rules</p></li><li><p>A local memory that stores example reviews and supports feedback-based improvement</p></li><li><p>An architecture that supports multiple agent perspectives (e.g., one for engineering, one as a domain expert, one for business impact) so they can collaborate on assessments</p></li></ul><p>The agent now parses product specs, flags potential issues based on historical patterns, and suggests areas for deeper exploration.</p><p>I&#8217;ve trialed it with human reviewers and evaluated it against past reviews. Initial feedback has been encouraging &#8211; people said it helped them get oriented faster and allowed them to focus their effort where it mattered most. 
But don&#8217;t clap yet: there&#8217;s still much to be done, and I think that side of working with AI tools isn&#8217;t talked about enough. We all expect it will save us time and money, but there is an upfront cost to learning, experimenting, and applying that knowledge effectively.</p><h3><strong>Why Not Just Use ChatGPT?</strong></h3><p>I&#8217;ve seen demos of LLMs reviewing product specs by simply pasting in a prompt and asking, &#8220;What are the risks?&#8221; That&#8217;s a fine starting point &#8211; but in real operational settings, that alone doesn&#8217;t cut it. Here&#8217;s why I needed a more extensible setup:</p><ul><li><p><strong>Specs are long and inconsistent</strong>: not all LLMs handle 10+ page documents well without structured guidance</p></li><li><p><strong>Surface-level suggestions aren&#8217;t helpful</strong>: &#8220;Be careful of bots&#8221; isn&#8217;t actionable; we need contextualized, system-aware recommendations</p></li><li><p><strong>Institutional knowledge matters</strong>: knowing what we&#8217;ve suggested in past reviews is helpful, and that history lives in tickets, comments, and scattered documents &#8211; not in a single static prompt</p></li></ul><p>That&#8217;s why I used tools that allow:</p><ul><li><p>Persistent memory (a local library of real-world examples)</p></li><li><p>Customized rules and system prompts</p></li><li><p>Integration with MCP for internal data context</p></li></ul><p><strong>It wasn&#8217;t about proving that an LLM </strong><em><strong>could</strong></em><strong> read a doc. It was about helping the LLM reason like someone who&#8217;s done dozens of reviews before.</strong></p><h3><strong>Why TPMs Are Uniquely Positioned</strong></h3><p>I&#8217;m not an engineer (anymore). 
But I&#8217;m technical enough to navigate internal infra, and product-minded enough to design useful workflows.</p><p>This is why TPMs are in a uniquely powerful position right now:</p><ul><li><p>We sit at the intersection of processes, people, and tools</p></li><li><p>We understand how work <em>actually</em> gets done &#8211; not just how it&#8217;s specced</p></li><li><p>We have just enough technical fluency to stitch things together</p></li></ul><p>This combination of breadth, depth, and just enough tech is what makes TPMs well-suited to drive value from internal AI tools. We know which problems matter. We know who they affect. TPMs also excel at bridging the gap between the technical and non-technical and are often described as the glue. But with LLMs, we can be more than glue &#8211; we can be <em>builders and enablers</em>.</p><h3><strong>What&#8217;s Been Challenging</strong></h3><p>While I&#8217;ve made good progress, here&#8217;s what I&#8217;ve had to work around:</p><ul><li><p>To access internal tools and MCP servers, I had to follow engineering onboarding guides to set up my machine. As someone who hasn&#8217;t done any development work in years, this was a hurdle. For someone without an engineering background, it&#8217;d be even harder.</p></li><li><p>The prototype lives locally, which makes it hard to share &#8211; especially with non-technical teammates. I can easily put my artifacts in a repo, but the friction to onboard to new and developing internal AI tools that meet this use case is high for non-engineers.</p></li><li><p>Gaining visibility and buy-in took time. But the tech is moving quickly, and I&#8217;ve fortunately found allies along the way.</p></li></ul><p>I&#8217;ve had encouraging conversations with folks who are thinking seriously about enablement, tooling equity, and scalable access. 
Still, this experience also highlights a broader pattern I think many companies will encounter as they integrate AI deeper into how we work.</p><h3><strong>What Companies Should Be Thinking About</strong></h3><p>Here are some questions I think are worth reflecting on:</p><ul><li><p>Have we made AI tooling accessible beyond engineering teams?</p></li><li><p>Have we educated people on what internal tools like MCP actually enable?</p></li><li><p>Are AI workflows shippable and shareable?</p></li><li><p>Are TPMs, designers, analysts, and reviewers empowered to build?</p></li></ul><p>These are not critiques &#8211; they&#8217;re opportunities to scale responsibly. If you have TPMs who understand ML and have deep domain knowledge, lean into them. The fastest path to AI leverage may not come from AI specialists &#8211; it might come from your cross-functional builders who already know the systems.</p><h3><strong>What Comes Next</strong></h3><p>I&#8217;m continuing to refine the review agent, layering in more domain perspectives, expanding its capabilities, and seeking ways to make it more accessible.</p><p>This is where companies are heading: moving from chatbot novelty to operational leverage. <strong>Prompt design is not enough; </strong><em><strong>domain expertise</strong></em><strong> is what turns a prompt into a high-leverage system.</strong> These kinds of internal agents will become part of how we scale expert thinking across fast-moving product orgs.</p><p>If you&#8217;re working on similar problems &#8211; or wondering how to create effective AI systems &#8211; I&#8217;d love to connect.</p><p>This kind of work doesn&#8217;t just come from engineers anymore. And the sooner we realize that, the faster we&#8217;ll all move. Non-coders can build systems. 
With the right tools, prototyping agentic workflows is within reach for anyone willing to experiment.</p><div><hr></div><p>This post was originally <a href="https://www.linkedin.com/pulse/beyond-prompts-scaling-your-companys-workflows-llms-rayana-stanek-gqeyc/">on LinkedIn</a>.</p>]]></content:encoded></item><item><title><![CDATA[Have you become an AI-critic yet?]]></title><description><![CDATA[Where do you stand when it comes to the outlook on AI tools?]]></description><link>https://blog.earlyspark.com/p/have-you-become-an-ai-critic-yet</link><guid isPermaLink="false">https://blog.earlyspark.com/p/have-you-become-an-ai-critic-yet</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Fri, 02 May 2025 06:11:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!AhcZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AhcZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AhcZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!AhcZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png 848w, 
https://substackcdn.com/image/fetch/$s_!AhcZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!AhcZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AhcZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png" width="1280" height="720" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AhcZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!AhcZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png 848w, 
https://substackcdn.com/image/fetch/$s_!AhcZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!AhcZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d38b43-6347-400a-99bd-51f0f8cf0542_1280x720.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">people with various emotions. 
you can tell it&#8217;s AI-generated because there&#8217;s a random arm sticking out.</figcaption></figure></div><p>With every wave of hype, critics aren&#8217;t far behind &#8212; and AI is no exception. I&#8217;ve realized I embody every type along the tech belief spectrum below. Does that make me a hypocrite for using AI? I&#8217;m not sure.</p><ul><li><p><strong>Technophile</strong><em>:</em> Loves and embraces tech; excited by innovation. <em>Attitude?</em> Enthusiastic, positive. &#8220;Tech makes life better and cooler!&#8221;</p></li><li><p><strong>Techno-optimist</strong><em>:</em> Believes tech can solve major global problems. <em>Attitude?</em> Hopeful, sometimes idealistic. &#8220;Tech is the key to human progress.&#8221;</p></li><li><p><strong>Techno-pragmatist</strong><em>:</em> Sees tech as useful <em>if</em> applied wisely and ethically. <em>Attitude?</em> Cautious, balanced. &#8220;Tech helps&#8212;when thoughtfully managed.&#8221;</p></li><li><p><strong>Techno-skeptic</strong><em>:</em> Distrusts hype; concerned about harm or overreach. <em>Attitude?</em> Wary, critical. &#8220;Not all tech is good&#8212;proceed with caution.&#8221;</p></li><li><p><strong>Techno-critic</strong><em>:</em> Opposes much of modern tech as dehumanizing or exploitative. <em>Attitude?</em> Resistant, sometimes radical. &#8220;We were better off before all this.&#8221;</p></li></ul><h3><strong>Me as a technophile, the eternal optimist</strong></h3><p>Like many of you, I&#8217;ve been diving deep into AI tools now that they&#8217;re more accessible than ever. I&#8217;ve always imagined a future where tech would make my life easier and used to joke about making a Slackbot clone of myself that could pull context, ask questions, and reply in my likeness. That reality isn&#8217;t far off, as long as I&#8217;ve fed an LLM enough data. I could outsource all the repetitive, mundane tasks to it, right? 
As a TPM, I could save so much time writing narratives in my voice&#8230; also, let&#8217;s create MCP servers for all the things! In fact, at this point, I would probably think twice about taking a role where the infrastructure wasn&#8217;t already in place for me to use these tools on a day-to-day basis to help with my job. Studio Ghibli me away; there are worse things happening in this world.</p><h3><strong>The realist</strong></h3><p>But&#8230; are these tools <em>actually</em> effective, compared to the investments we are putting in? How much energy and resources have we wasted for marginal gains? <strong><a href="https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117">How much of the ocean have we burned</a></strong>? The speed of adoption is outpacing education and regulation. It&#8217;s like recycling programs, where the burden is generally pushed down to consumers instead of being solved at scale at the manufacturing level. My stance on whether the use of AI is a net benefit does not outweigh my need for a paycheck and healthcare. If food is only available in non-recyclable containers, then yes, I would still buy it to survive. That&#8217;s why it&#8217;s important for the companies creating this tech to anticipate and build mitigations for the risks they are creating.</p><h3><strong>Throw the internet into the fire</strong></h3><p>Maybe this comes from working in Trust &amp; Safety, but there are days when I want to unplug the internet entirely, especially for the sake of protecting children and vulnerable communities. AI tools aren&#8217;t just trained on &#8220;stolen&#8221; work; they&#8217;re also stripping away human connection at scale. Instead of asking a friend a question, now you just Google it. Instead of building relationships with other humans, we now have AI companions. If you resist these tools at work, you may risk becoming a pariah. 
As <strong><a href="https://techcrunch.com/2025/04/30/duolingo-launches-148-courses-created-with-ai-after-sharing-plans-to-replace-contractors-with-ai/">companies become AI-first</a></strong> and there is a <strong><a href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work">push for AI tool adoption</a></strong>, I think AI skeptics will become more prevalent, even if we started out starry-eyed.</p><p>So which type do you relate to the most on the spectrum, and why? Whether you&#8217;re still in the honeymoon stage with AI tools or want a divorce, I think it&#8217;s important to try to understand each other without aiming to shut down the conversation. So yeah, maybe I didn&#8217;t need to generate an <a href="https://www.linkedin.com/posts/rayanastanek_despite-the-chronic-sleep-deprivation-that-activity-7317376636083965952-qkUq?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAACucZ4B3unIuXvZHR5Fi4TZN-ArlxuNuEE">AI action figure</a>&#8230; but it&#8217;s cute.</p><div><hr></div><p>This post was originally <a href="https://www.linkedin.com/pulse/have-you-become-ai-critic-yet-rayana-stanek-dt2lc/">on LinkedIn</a>.</p>]]></content:encoded></item><item><title><![CDATA[Beyond Chatbots: The Rise of Agentic AI]]></title><description><![CDATA[Here's my perspective on where I think AI tools are heading]]></description><link>https://blog.earlyspark.com/p/rise-of-agentic-ai</link><guid isPermaLink="false">https://blog.earlyspark.com/p/rise-of-agentic-ai</guid><dc:creator><![CDATA[RayAna]]></dc:creator><pubDate>Tue, 18 Mar 2025 06:51:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dXvW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dXvW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dXvW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png 424w, https://substackcdn.com/image/fetch/$s_!dXvW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png 848w, https://substackcdn.com/image/fetch/$s_!dXvW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png 1272w, https://substackcdn.com/image/fetch/$s_!dXvW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dXvW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png" width="1237" height="397" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:397,&quot;width&quot;:1237,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:60888,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dXvW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png 424w, https://substackcdn.com/image/fetch/$s_!dXvW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png 848w, https://substackcdn.com/image/fetch/$s_!dXvW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png 1272w, https://substackcdn.com/image/fetch/$s_!dXvW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c800ff-fd0c-44cc-9c6b-dee7a658701d_1237x397.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Interest in &#8220;agentic ai&#8221; over the past 12 months, according to Google Trends</figcaption></figure></div><p>If you haven&#8217;t already heard of &#8220;agentic AI,&#8221; don&#8217;t worry &#8212; you&#8217;re not too far behind the AI hype train. The term has gained traction over the past few months, and according to <strong><a href="https://trends.google.com/trends/explore?q=agentic%20ai#TIMESERIES">Google Trends</a></strong>, interest in &#8220;agentic AI&#8221; has spiked since early 2025 worldwide. In simple terms, agentic AI refers to AI systems that can operate with greater autonomy, making decisions and executing tasks with minimal human intervention. While still evolving, these systems are a natural progression from today&#8217;s AI tools, pushing us toward more intelligent automation.</p><p>I won&#8217;t attempt to define it further &#8212; there are already plenty of authoritative sources that do that better than I can. 
Instead, I want to offer my perspective on where this is heading.</p><p>I&#8217;ve worked in tech across industries ranging from law libraries to ecommerce to gaming. I have foundational knowledge in predictive analytics from UCI and basic programming skills, and I currently work as a TPM for an ML team. While I&#8217;m conflating generative AI, AI tools, and agentic AI here (since they aren&#8217;t the same thing), one enables the other, creating a cascading effect.</p><p>For those of us with years of experience, there&#8217;s a tradeoff: we&#8217;re accustomed to doing things a certain way, which makes it harder to rethink work efficiencies. Meanwhile, younger professionals may have used ChatGPT and other LLMs as students since these tools became mainstream &#8212; and they now expect to continue using them at work. You no longer need to be an expert in applied/data science to use AI effectively. Across industries, there&#8217;s growing momentum to embed AI into internal processes &#8212; whether that&#8217;s using AI-powered coding assistants for rapid prototyping, automating meeting summaries, or deriving project statuses from multiple workstreams.</p><h3><strong>What Does This Mean for the Future?</strong></h3><ol><li><p><strong>AI Will Separate Those Who Adapt from Those Who Don&#8217;t. </strong>If you aren&#8217;t leveraging AI in your work, you risk falling behind. Those who do will 10x their output... eventually.</p></li><li><p><strong>The Real Value Lies in Those Who Can Effectively Direct AI. </strong>Job security won&#8217;t come from simply using AI &#8212; it will come from knowing how to get the best results from it. Not all prompts are created equal, and understanding prompt engineering and optimizing outputs will be key to increasing productivity.</p></li><li><p><strong>The Hype Will Peak, Then Settle into What Actually Works. </strong>This tech wave will bring a flood of adoption and vibe coders, followed by a reality check. 
The market will weed out ineffective applications, leaving only the most impactful uses of AI.</p></li><li><p><strong>AI Will Act as an Extension of Ourselves. </strong>I&#8217;ll finally be able to &#8220;clone&#8221; myself &#8212; at least in the sense that an AI agent could review documents with my tone, highlight risks, and, of course, ask that evergreen TPM question: <em>&#8220;By when?&#8221; </em>But what I really want is for agentic AI to manifest in physical form and do the laundry, load the dishwasher, and cook dinner. I&#8217;ve been promised that multimodality will get us there, so we&#8217;ll see. &#128579;</p></li><li><p><strong>Data Quality Will Matter More Than Ever. </strong>AI is only as good as the data it&#8217;s trained on. Clean, structured documentation will be a major advantage in generating reliable AI-driven insights. While LLMs are improving at deciphering raw, unstructured notes, they aren&#8217;t perfect yet. For coding, AI can help generate documentation.</p></li><li><p><strong>With Every Powerful Tool Comes Risks. </strong>AI&#8217;s accessibility means both good and bad applications will emerge. But that is another post for another day. Even for beneficial applications, guardrails will be necessary to prevent unintended behavior.</p></li></ol><h3><strong>What Should You Do Next?</strong></h3><p>The AI landscape is shifting fast. If you want to stay ahead, here&#8217;s where to start:</p><p>&#9989; <strong>Learn prompt engineering </strong>&#8212; understanding how to structure prompts effectively will determine how much value you can extract from AI. 
This <strong><a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview">overview from Anthropic</a></strong> is where I started, fwiw.</p><p>&#9989; <strong>Explore how agentic AI can boost your internal productivity </strong>&#8212; what workflows can you automate or enhance?</p><p>&#9989; <strong>Rethink your approach to work </strong>&#8212; AI isn&#8217;t just a tool; it&#8217;s a shift in how we operate. The better you adapt, the more valuable you become.</p><p>What do you think? How are you using AI in your workflows?</p><div><hr></div><p>This post was originally <a href="https://www.linkedin.com/pulse/beyond-chatbots-rise-agentic-ai-rayana-stanek-4vtec/">on LinkedIn</a>.</p>]]></content:encoded></item></channel></rss>