<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[A Blog about Software Development]]></title><description><![CDATA[A Blog about Software Development]]></description><link>https://mahdix.com</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 15:09:29 GMT</lastBuildDate><atom:link href="https://mahdix.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Stop Treating AI Like a Coworker]]></title><description><![CDATA[It’s an Exoskeleton, and That Changes Everything
Most people are thinking about AI the wrong way.
They imagine it as a coworker: Something you assign tasks to… wait for… and hope it delivers.
Sounds re]]></description><link>https://mahdix.com/stop-treating-ai-like-a-coworker</link><guid isPermaLink="true">https://mahdix.com/stop-treating-ai-like-a-coworker</guid><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Tue, 17 Mar 2026 11:08:09 GMT</pubDate><content:encoded><![CDATA[<h2>It’s an Exoskeleton, and That Changes Everything</h2>
<p>Most people are thinking about AI the wrong way.</p>
<p>They imagine it as a <strong>coworker</strong>:<br />Something you assign tasks to… wait for… and hope it delivers.</p>
<p>Sounds reasonable.<br />But it’s also why so many teams are disappointed.</p>
<p>There’s a better mental model:</p>
<p>👉 <strong>AI is not your coworker. It’s your exoskeleton.</strong></p>
<p>And once you see it this way, everything clicks.</p>
<hr />
<h1>What an Exoskeleton Actually Does</h1>
<p>Think about a physical exoskeleton.</p>
<p>It doesn’t replace the worker.<br />It doesn’t think for them.<br />It doesn’t act independently.</p>
<p>Instead, it:</p>
<ul>
<li><p>Makes them stronger</p>
</li>
<li><p>Reduces fatigue</p>
</li>
<li><p>Helps at specific points of strain</p>
</li>
<li><p>Extends what they’re already capable of</p>
</li>
</ul>
<p>Factories, hospitals, and the military use exoskeletons for one reason:<br /><strong>they amplify humans instead of replacing them</strong></p>
<p>That’s exactly how AI works at its best.</p>
<hr />
<h1>The Big Mistake: Treating AI Like an Employee</h1>
<p>When you treat AI like a coworker, you expect:</p>
<ul>
<li><p>End-to-end solutions</p>
</li>
<li><p>Independent thinking</p>
</li>
<li><p>Reliable autonomy</p>
</li>
</ul>
<p>And then you get frustrated when it:</p>
<ul>
<li><p>Hallucinates</p>
</li>
<li><p>Misses context</p>
</li>
<li><p>Makes weird decisions</p>
</li>
</ul>
<p>That’s not a bug. It’s a mismatch in expectations.</p>
<p>AI isn’t failing.<br /><strong>You’re using the wrong mental model.</strong></p>
<hr />
<h1>The Right Way to Think About AI</h1>
<p>Instead, think like an exoskeleton designer.</p>
<p>They don’t ask:</p>
<blockquote>
<p>“How do we replace the human?”</p>
</blockquote>
<p>They ask:</p>
<blockquote>
<p>“Where does the human struggle, and how do we support that?”</p>
</blockquote>
<p>That’s the shift.</p>
<hr />
<h1>What This Looks Like in Practice</h1>
<p>Let’s make it concrete.</p>
<h2>❌ Bad approach (coworker mindset)</h2>
<p>“AI, build this entire feature.”</p>
<p>Result:<br />Messy output, lots of fixes, frustration.</p>
<hr />
<h2>✅ Better approach (exoskeleton mindset)</h2>
<p>Break the work into pressure points:</p>
<ul>
<li><p>Generate boilerplate → AI</p>
</li>
<li><p>Refactor repetitive code → AI</p>
</li>
<li><p>Explore edge cases → AI</p>
</li>
<li><p>Write tests → AI</p>
</li>
</ul>
<p>But:</p>
<ul>
<li><p>Define architecture → you</p>
</li>
<li><p>Make tradeoffs → you</p>
</li>
<li><p>Own quality → you</p>
</li>
</ul>
<p>AI handles the <strong>heavy lifting</strong>.<br />You stay the <strong>pilot</strong>.</p>
<hr />
<h1>Why This Works So Well</h1>
<p>Because AI is incredible at:</p>
<ul>
<li><p>Speed</p>
</li>
<li><p>Pattern recognition</p>
</li>
<li><p>Repetition</p>
</li>
<li><p>Expanding ideas quickly</p>
</li>
</ul>
<p>But weak at:</p>
<ul>
<li><p>Judgment</p>
</li>
<li><p>Context</p>
</li>
<li><p>Taste</p>
</li>
<li><p>Responsibility</p>
</li>
</ul>
<p>So the winning combo is simple:</p>
<p>👉 <strong>Human = direction</strong><br />👉 <strong>AI = amplification</strong></p>
<hr />
<h1>The Hidden Insight Most People Miss</h1>
<p>The real power of AI is not “doing work for you.”</p>
<p>It’s this:</p>
<p>👉 <strong>It removes friction from thinking and building</strong></p>
<p>You can:</p>
<ul>
<li><p>Explore 10 ideas instead of 1</p>
</li>
<li><p>Prototype in hours instead of days</p>
</li>
<li><p>Iterate without fatigue</p>
</li>
</ul>
<p>That’s not replacement.</p>
<p>That’s <strong>leverage</strong>.</p>
<hr />
<h1>Why “Autonomous Agents” Often Disappoint</h1>
<p>There’s a lot of hype around fully autonomous AI systems.</p>
<p>But here’s the reality:</p>
<p>The more autonomy you give AI, the more you lose:</p>
<ul>
<li><p>Control</p>
</li>
<li><p>Predictability</p>
</li>
<li><p>Trust</p>
</li>
</ul>
<p>And once trust drops, usage drops.</p>
<p>That’s why many “AI agents” feel impressive in demos…<br />but break down in real workflows.</p>
<p>The future isn’t fully autonomous systems.</p>
<p>👉 It’s tightly integrated systems that <strong>feel like an extension of you</strong></p>
<hr />
<h1>The Best Teams Already Get This</h1>
<p>The teams seeing real results with AI aren’t trying to replace people.</p>
<p>They’re doing something smarter:</p>
<p>They’re building workflows where:</p>
<ul>
<li><p>AI is always present</p>
</li>
<li><p>Always assisting</p>
</li>
<li><p>Always accelerating</p>
</li>
</ul>
<p>Not as a separate tool…</p>
<p>But as something you <em>wear</em>.</p>
<hr />
<h1>A Simple Test You Can Use Today</h1>
<p>Ask yourself:</p>
<blockquote>
<p>“Where am I doing repetitive, mentally draining work?”</p>
</blockquote>
<p>That’s your exoskeleton opportunity.</p>
<p>Start there.</p>
<p>Not with “build an AI agent.”</p>
<p>Not with “automate everything.”</p>
<p>Just:</p>
<p>👉 <strong>Remove one point of friction.</strong></p>
<p>Then another.<br />Then another.</p>
<hr />
<h1>The Future of AI (The Part That Actually Matters)</h1>
<p>The biggest wins won’t come from AI that replaces humans.</p>
<p>They’ll come from AI that:</p>
<ul>
<li><p>Feels invisible</p>
</li>
<li><p>Fits naturally into workflows</p>
</li>
<li><p>Makes you faster without thinking about it</p>
</li>
</ul>
<p>Like a great exoskeleton:</p>
<p>You don’t notice it.<br />You just feel stronger.</p>
<hr />
<h1>Bottom Line</h1>
<p>Stop asking:</p>
<blockquote>
<p>“What can AI do for me?”</p>
</blockquote>
<p>Start asking:</p>
<blockquote>
<p>“Where do I struggle, and how can AI amplify me there?”</p>
</blockquote>
<p>That one shift will put you ahead of most people still chasing the wrong idea.</p>
<p>Source: <a href="https://www.kasava.dev/blog/ai-as-exoskeleton">https://www.kasava.dev/blog/ai-as-exoskeleton</a></p>
]]></content:encoded></item><item><title><![CDATA[My AI Adoption Journey: From Skeptic to Daily Power User]]></title><description><![CDATA[I didn’t wake up one day and decide, “AI will change everything.” My journey into AI was slower, messier, and honestly… a bit reluctant at first.
If you’re a builder, engineer, or curious technologist,]]></description><link>https://mahdix.com/my-ai-adoption-journey-from-skeptic-to-daily-power-user</link><guid isPermaLink="true">https://mahdix.com/my-ai-adoption-journey-from-skeptic-to-daily-power-user</guid><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 02 Mar 2026 16:07:03 GMT</pubDate><content:encoded><![CDATA[<p>I didn’t wake up one day and decide, “AI will change everything.”<br />My journey into AI was slower, messier, and honestly… a bit reluctant at first.</p>
<p>If you’re a builder, engineer, or curious technologist, you might recognize yourself in this story.</p>
<h2>Phase 1 - Dismissing the Hype</h2>
<p>When modern AI tools first exploded onto the scene, I wasn’t impressed.</p>
<p>Yes, they could generate text.<br />Yes, they could answer questions.<br />But most outputs felt shallow, generic, and occasionally wrong in confident ways: the worst combination.</p>
<p>As someone who cares deeply about craft, correctness, and depth, I didn’t see how this could fit into serious work. It felt like autocomplete on steroids, not a real collaborator.</p>
<p>So I ignored it.</p>
<p>Big mistake.</p>
<h2>Phase 2 - Curiosity Wins</h2>
<p>Eventually curiosity got the better of me. I started experimenting cautiously.</p>
<p>Not for important work. Not for anything critical.</p>
<p>Just small tasks:</p>
<ul>
<li><p>Brainstorming ideas</p>
</li>
<li><p>Summarising articles</p>
</li>
<li><p>Generating rough outlines</p>
</li>
<li><p>Exploring unfamiliar topics</p>
</li>
</ul>
<p>And something surprising happened.</p>
<p>AI wasn’t useful because it was perfect.<br />It was useful because it was <em>fast</em>.</p>
<p>It became a thinking amplifier.</p>
<p>Instead of staring at a blank page, I now had a messy draft to react to.<br />Instead of spending hours researching basics, I had a starting map.</p>
<p>The value wasn’t in the answers; it was in momentum.</p>
<h2>Phase 3 - First Real Use Cases</h2>
<p>Then came the turning point: using AI for real work.</p>
<p>Not toy tasks. Not experiments.</p>
<p>Actual productivity.</p>
<p>I started with areas where speed mattered more than perfection:</p>
<h3>Writing</h3>
<p>Drafts that used to take hours suddenly took minutes.</p>
<p>Not publish-ready, but close enough that editing was faster than starting from scratch.</p>
<h3>Coding</h3>
<p>AI didn’t replace engineering skill.<br />But it removed friction:</p>
<ul>
<li><p>Boilerplate code</p>
</li>
<li><p>Test scaffolding</p>
</li>
<li><p>Documentation</p>
</li>
<li><p>Refactoring suggestions</p>
</li>
</ul>
<p>It felt like having a junior engineer who never got tired and responded instantly.</p>
<h3>Learning</h3>
<p>AI became the fastest tutor I’ve ever had.</p>
<p>Instead of hunting through documentation, I could ask:</p>
<blockquote>
<p>“Explain this concept simply.”<br />“Give me examples.”<br />“Compare approaches.”</p>
</blockquote>
<p>The feedback loop shrank dramatically.</p>
<h2>Phase 4 - Changing How I Work</h2>
<p>At this point, AI stopped being a tool and started becoming part of my workflow.</p>
<p>I began to think differently about tasks:</p>
<p><strong>Old mindset:</strong><br />“How do I do this?”</p>
<p><strong>New mindset:</strong><br />“How do I collaborate with AI to do this faster and better?”</p>
<p>This shift is subtle but powerful.</p>
<p>AI works best when you treat it as:</p>
<ul>
<li><p>A brainstorming partner</p>
</li>
<li><p>A rapid prototyper</p>
</li>
<li><p>A tireless assistant</p>
</li>
<li><p>A second brain</p>
</li>
</ul>
<p>Not as an oracle.</p>
<p>The biggest unlock was learning to iterate.</p>
<p>The first output is rarely great.<br />But the fifth interaction often is.</p>
<h2>Phase 5 - Discovering the Real Superpower</h2>
<p>Here’s the insight that changed everything:</p>
<p><strong>AI compresses the distance between idea and execution.</strong></p>
<p>Things that used to feel like “maybe someday” projects suddenly became weekend experiments.</p>
<p>You can:</p>
<ul>
<li><p>Explore new domains without months of ramp-up</p>
</li>
<li><p>Prototype ideas quickly</p>
</li>
<li><p>Validate concepts before committing</p>
</li>
<li><p>Move from thought → artifact almost instantly</p>
</li>
</ul>
<p>It lowers the cost of trying.</p>
<p>And when trying becomes cheap, innovation accelerates.</p>
<h2>Phase 6 - Accepting the Limitations</h2>
<p>AI is powerful but not magical.</p>
<p>It still:</p>
<ul>
<li><p>Makes mistakes</p>
</li>
<li><p>Lacks true understanding</p>
</li>
<li><p>Requires guidance</p>
</li>
<li><p>Needs verification</p>
</li>
</ul>
<p>Blind trust is dangerous.</p>
<p>But total skepticism is equally limiting.</p>
<p>The winning strategy is <strong>informed partnership</strong>:</p>
<p>Trust, but verify.<br />Use it aggressively, but critically.<br />Leverage speed, guard quality.</p>
<h2>Phase 7 - Where I Am Now</h2>
<p>Today, AI touches almost everything I do:</p>
<ul>
<li><p>Writing</p>
</li>
<li><p>Coding</p>
</li>
<li><p>Research</p>
</li>
<li><p>Planning</p>
</li>
<li><p>Learning</p>
</li>
<li><p>Decision support</p>
</li>
</ul>
<p>Not because it replaces skill but because it multiplies it.</p>
<p>The best way to describe it:</p>
<p><strong>AI is like giving your brain a high-speed interface.</strong></p>
<p>You still steer.<br />You still decide.<br />You still own the outcome.</p>
<p>But the friction is dramatically lower.</p>
<h2>Lessons for Builders and Professionals</h2>
<p>If you’re still on the fence, here’s what I wish someone had told me earlier:</p>
<h3>1. Start Small</h3>
<p>Don’t wait for the perfect use case.<br />Use it for low-risk tasks first.</p>
<p>Momentum builds confidence.</p>
<h3>2. Learn to Prompt by Iterating</h3>
<p>Good results come from dialogue, not one-shot questions.</p>
<p>Start from something. Refine. Clarify. Push deeper.</p>
<h3>3. Use It Where Speed Matters</h3>
<p>AI shines in early stages:</p>
<ul>
<li><p>Exploration</p>
</li>
<li><p>Drafting</p>
</li>
<li><p>Brainstorming</p>
</li>
<li><p>Prototyping</p>
</li>
</ul>
<h3>4. Keep Your Expertise in the Loop</h3>
<p>Your judgment is the quality filter.</p>
<p>AI without human oversight produces mediocrity.<br />AI guided by expertise produces leverage.</p>
<h3>5. The Biggest Risk Is Ignoring It</h3>
<p>Not using AI today is like ignoring the internet in the late 90s.</p>
<p>You might survive for a while.<br />But you’ll slowly fall behind people who amplify themselves.</p>
<h2>Final Thought</h2>
<p>My AI adoption journey wasn’t about discovering a magic tool.</p>
<p>It was about discovering a new way to work.</p>
<p>AI doesn’t replace builders. It upgrades them.</p>
<p>And the people who learn to collaborate with it early will have an unfair advantage for years to come.</p>
<hr />
<p>Source: My experience plus <a href="https://mitchellh.com/writing/my-ai-adoption-journey">https://mitchellh.com/writing/my-ai-adoption-journey</a></p>
]]></content:encoded></item><item><title><![CDATA[Meet NanoLang: The Tiny Programming Language Built for AI (and Curious Devs)]]></title><description><![CDATA[Imagine a programming language designed not just for humans but also for AI. Not adapted for AI. Not retrofitted. Built from scratch so AI can read and write it easily.
That’s exactly what NanoLang is.
It’s a tiny, experimental language created by vet...]]></description><link>https://mahdix.com/meet-nanolang-the-tiny-programming-language-built-for-ai-and-curious-devs</link><guid isPermaLink="true">https://mahdix.com/meet-nanolang-the-tiny-programming-language-built-for-ai-and-curious-devs</guid><category><![CDATA[AI]]></category><category><![CDATA[programming languages]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 16 Feb 2026 12:29:57 GMT</pubDate><content:encoded><![CDATA[<p>Imagine a programming language designed not just for humans but also for AI.<br />Not adapted for AI. Not retrofitted. <strong>Built from scratch so AI can read and write it easily.</strong></p>
<p>That’s exactly what <strong>NanoLang</strong> is.</p>
<p>It’s a tiny, experimental language created by veteran engineer <strong>Jordan Hubbard</strong>, with a bold idea:</p>
<blockquote>
<p>What if we made a programming language specifically for code-generating AI?</p>
</blockquote>
<p>And here’s the twist: even if you don’t care about AI, NanoLang is still a fascinating playground for learning how languages, compilers, and testing really work.</p>
<p>Let’s dive in.</p>
<hr />
<h1 id="heading-why-nanolang-exists">Why NanoLang Exists</h1>
<p>Modern programming languages are messy.</p>
<p>They’ve grown for decades, accumulated quirks, and contain endless edge cases. Humans learn them slowly, and AI models struggle even more.</p>
<p>NanoLang flips the approach:</p>
<p>👉 Instead of forcing AI to learn complex languages like C++ or JavaScript<br />👉 create a <strong>simple, unambiguous language that AI can generate perfectly</strong></p>
<p>The project’s goal is to reduce ambiguity in code generation by simplifying syntax and semantics so AI tools make fewer mistakes.</p>
<p>Think of it like:</p>
<ul>
<li><p>Esperanto for programming</p>
</li>
<li><p>or LEGO blocks instead of random scrap metal</p>
</li>
<li><p>or training wheels for compilers</p>
</li>
</ul>
<hr />
<h1 id="heading-what-makes-nanolang-special">What Makes NanoLang Special</h1>
<p>NanoLang isn’t just “small.” It’s intentionally designed around a few radical ideas.</p>
<h2 id="heading-1-syntax-that-ai-cant-misread">1. Syntax That AI Can’t Misread</h2>
<p>NanoLang uses very explicit, structured syntax (including prefix notation) so there’s almost zero ambiguity.</p>
<p>That matters because:</p>
<ul>
<li><p>AI models often misinterpret punctuation and operator precedence</p>
</li>
<li><p>humans rely on intuition</p>
</li>
<li><p>AI needs strict clarity</p>
</li>
</ul>
<p>NanoLang removes guesswork.</p>
<hr />
<h2 id="heading-2-testing-is-mandatory-yes-really">2. Testing Is Mandatory (Yes, Really)</h2>
<p>Here’s a wild feature:<br /><strong>every function must include tests.</strong></p>
<p>These are called <em>shadow blocks</em>, and they run at compile time.</p>
<p>So instead of writing tests later, NanoLang forces you to think:</p>
<blockquote>
<p>“How do I prove this works?”</p>
</blockquote>
<p>This bakes testing discipline directly into the language itself.</p>
<p>For junior devs, this is gold. It trains the mindset of:</p>
<ul>
<li><p>specification</p>
</li>
<li><p>verification</p>
</li>
<li><p>correctness</p>
</li>
</ul>
<p>Not just “it compiles.”</p>
<hr />
<h2 id="heading-3-it-compiles-to-c-for-speed">3. It Compiles to C (for Speed)</h2>
<p>NanoLang doesn’t run on a VM.</p>
<p>Instead, it transpiles to C, which then compiles to native code.</p>
<p>That gives you:</p>
<ul>
<li><p>portability</p>
</li>
<li><p>performance</p>
</li>
<li><p>interoperability with C libraries</p>
</li>
</ul>
<p>Basically: modern language ideas + proven C backend.</p>
<hr />
<h2 id="heading-4-self-hosting-the-compiler-is-written-in-nanolang">4. Self-Hosting: The Compiler Is Written in NanoLang</h2>
<p>One of the coolest achievements:</p>
<p>👉 NanoLang can compile itself.</p>
<p>This is called <strong>self-hosting</strong>, and it’s a milestone in language design.</p>
<p>It means:</p>
<ul>
<li><p>the language is expressive enough</p>
</li>
<li><p>the compiler is mature enough</p>
</li>
<li><p>the ecosystem is real</p>
</li>
</ul>
<p>Many famous languages went through this stage (C, Rust, Go).</p>
<p>Seeing it in a tiny experimental language is seriously impressive.</p>
<hr />
<h1 id="heading-nanolang-ai-a-new-way-to-code">NanoLang + AI = A New Way to Code</h1>
<p>NanoLang was designed for LLM code generation from day one.</p>
<p>The repo includes:</p>
<ul>
<li><p>formal language spec</p>
</li>
<li><p>training references</p>
</li>
<li><p>examples</p>
</li>
<li><p>tests for all constructs</p>
</li>
</ul>
<p>This makes it ideal for AI tools to learn.</p>
<p>And it works.</p>
<p>In one experiment, an AI generated a Mandelbrot fractal CLI tool in NanoLang (after some debugging), showing how AI can learn and use entirely new languages.</p>
<p>That’s huge.</p>
<p>Because it suggests a future where:</p>
<p>👉 AI doesn’t just write existing languages<br />👉 it writes <em>AI-native languages</em></p>
<hr />
<h1 id="heading-why-junior-developers-should-care">Why Junior Developers Should Care</h1>
<p>You might be thinking:</p>
<blockquote>
<p>“Cool… but I’ll never use NanoLang in production.”</p>
</blockquote>
<p>Probably true.</p>
<p>But that’s missing the point.</p>
<p>NanoLang is valuable because it teaches fundamentals <strong>better than most real languages</strong>.</p>
<p>Here’s what you learn fast:</p>
<h2 id="heading-how-compilers-work">How compilers work</h2>
<p>Because NanoLang is tiny, you can actually read the compiler.</p>
<h2 id="heading-language-design-trade-offs">Language design trade-offs</h2>
<p>Why immutability? Why prefix syntax? Why no globals?</p>
<h2 id="heading-testing-mindset">Testing mindset</h2>
<p>Mandatory tests change how you think about code.</p>
<h2 id="heading-ai-assisted-programming">AI-assisted programming</h2>
<p>You can watch an LLM learn a language from scratch.</p>
<p>That’s incredibly rare.</p>
<hr />
<h1 id="heading-a-glimpse-of-the-future-ai-native-programming">A Glimpse of the Future: AI-Native Programming</h1>
<p>NanoLang hints at something bigger.</p>
<p>Right now, AI writes code in:</p>
<ul>
<li><p>Python</p>
</li>
<li><p>JavaScript</p>
</li>
<li><p>C++</p>
</li>
<li><p>Rust</p>
</li>
</ul>
<p>But those languages weren’t built for AI.</p>
<p>NanoLang represents a new category:</p>
<blockquote>
<p><strong>AI-first programming languages</strong></p>
</blockquote>
<p>Languages optimized for:</p>
<ul>
<li><p>generation accuracy</p>
</li>
<li><p>verification</p>
</li>
<li><p>formal specs</p>
</li>
<li><p>low ambiguity</p>
</li>
</ul>
<p>As AI coding tools evolve, we may see:</p>
<ul>
<li><p>AI-native DSLs</p>
</li>
<li><p>auto-verified code languages</p>
</li>
<li><p>test-embedded syntax</p>
</li>
<li><p>machine-friendly compilers</p>
</li>
</ul>
<p>NanoLang is an early experiment in that direction.</p>
<hr />
<h1 id="heading-the-creators-perspective">The Creator’s Perspective</h1>
<p>NanoLang’s creator describes it as both:</p>
<ul>
<li><p>a serious experiment</p>
</li>
<li><p>and a creative exercise</p>
</li>
</ul>
<p>Designing a new language forces you to think deeply about programming itself: what’s essential and what’s accidental.</p>
<p>And that’s exactly why projects like this matter.</p>
<p>They expand how we think about software.</p>
<hr />
<h1 id="heading-should-you-try-nanolang">Should You Try NanoLang?</h1>
<p>If you’re a junior developer, absolutely.</p>
<p>Not because it’s practical, but because it’s enlightening.</p>
<p>You’ll gain:</p>
<ul>
<li><p>compiler intuition</p>
</li>
<li><p>language design awareness</p>
</li>
<li><p>testing discipline</p>
</li>
<li><p>AI coding insight</p>
</li>
</ul>
<p>Few projects offer all of that in one place.</p>
<hr />
<h1 id="heading-final-thoughts">Final Thoughts</h1>
<p>NanoLang is tiny.</p>
<p>But the idea behind it is massive.</p>
<p>It asks a simple question:</p>
<blockquote>
<p>What if programming languages were designed for both humans and AI?</p>
</blockquote>
<p>And then it actually builds one.</p>
<p>Whether NanoLang itself succeeds doesn’t matter.</p>
<p>Because experiments like this shape the future of programming.</p>
<p>And as AI becomes a bigger part of software development, understanding these ideas early could give you a huge advantage.</p>
<p>So if you’re curious, experimental, or just love learning how things work under the hood:</p>
<p>NanoLang is a perfect rabbit hole.</p>
<p>Source: <a target="_blank" href="https://github.com/jordanhubbard/nanolang">https://github.com/jordanhubbard/nanolang</a></p>
]]></content:encoded></item><item><title><![CDATA[Supercharged PostgreSQL Tips: Less Boring, More Powerful]]></title><description><![CDATA[Most PostgreSQL optimization guides feel like laundry lists of settings and indexes. But real performance gains often come from clever ideas, not just the usual tricks. Let’s unpack a few of those ideas in ways that actually make sense for you.
These...]]></description><link>https://mahdix.com/supercharged-postgresql-tips-less-boring-more-powerful</link><guid isPermaLink="true">https://mahdix.com/supercharged-postgresql-tips-less-boring-more-powerful</guid><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Tue, 10 Feb 2026 10:43:59 GMT</pubDate><content:encoded><![CDATA[<p>Most PostgreSQL optimization guides feel like laundry lists of settings and indexes. But <strong>real performance gains often come from clever ideas, not just the usual tricks</strong>. Let’s unpack a few of those ideas in ways that actually make sense for you.</p>
<p>These techniques go beyond “add an index and pray,” and they <em>really</em> help in situations where the database planner isn’t doing exactly what you want.</p>
<hr />
<h2 id="heading-1-stop-wasting-time-scanning-tables-when-you-dont-have-to">🎯 1. Stop Wasting Time Scanning Tables When You Don’t Have To</h2>
<p>Imagine a table of users with a <code>plan</code> column that is <strong>only allowed</strong> to be <code>'free'</code> or <code>'pro'</code> because of a <strong>check constraint</strong>:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> <span class="hljs-keyword">users</span> (
  <span class="hljs-keyword">id</span> <span class="hljs-built_in">SERIAL</span> PRIMARY <span class="hljs-keyword">KEY</span>,
  username <span class="hljs-built_in">TEXT</span> <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  plan <span class="hljs-built_in">TEXT</span> <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span> <span class="hljs-keyword">CHECK</span> (plan <span class="hljs-keyword">IN</span> (<span class="hljs-string">'free'</span>,<span class="hljs-string">'pro'</span>))
);
</code></pre>
<p>Now someone runs this:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> * <span class="hljs-keyword">FROM</span> <span class="hljs-keyword">users</span> <span class="hljs-keyword">WHERE</span> plan = <span class="hljs-string">'Pro'</span>;
</code></pre>
<p>That returns <em>no rows</em>, but PostgreSQL still scans every row! Why? Because, by default, the planner doesn’t use your check constraint to rule out impossible values.</p>
<h3 id="heading-clever-fix-enable-constraint-based-planning">🧠 Clever Fix: Enable Constraint-Based Planning</h3>
<p>By turning on:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SET</span> constraint_exclusion = <span class="hljs-keyword">on</span>;
</code></pre>
<p>PostgreSQL figures out up front that <code>'Pro'</code> (capital “P”) can’t match the check constraint. It returns the result instantly, no table scan!</p>
<p><strong>Why this matters:</strong> In reporting environments where analysts craft queries by hand, mistakes like casing can cause huge performance hits. Constraint exclusion can stop that.</p>
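<p>If you want to see the effect yourself, compare query plans with the setting on and off (the exact plan text varies across PostgreSQL versions, so treat the comment below as a sketch of what to look for):</p>
<pre><code class="lang-sql">SET constraint_exclusion = on;

-- With constraint exclusion on, the planner proves the predicate
-- can never satisfy the CHECK constraint and skips the scan,
-- typically showing a one-time "false" filter instead of a Seq Scan:
EXPLAIN SELECT * FROM users WHERE plan = 'Pro';
</code></pre>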
<hr />
<h2 id="heading-2-make-indexes-smaller-amp-faster-with-function-based-indexes">📉 2. Make Indexes Smaller &amp; Faster with Function-Based Indexes</h2>
<p>Let’s say you have a <code>sale</code> table:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> sale (
  <span class="hljs-keyword">id</span> <span class="hljs-built_in">SERIAL</span> PRIMARY <span class="hljs-keyword">KEY</span>,
  sold_at timestamptz <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  charged <span class="hljs-built_in">int</span> <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>
);
</code></pre>
<p>And analysts run queries to sum sales per day. Without an index, PostgreSQL must scan all 10M rows: slow!</p>
<p>The common solution is:<br />👉 <code>CREATE INDEX ON sale(sold_at);</code></p>
<p>That helps: query time drops, but the index is <strong>huge</strong>, and PostgreSQL still indexes full timestamps even though you only care about dates.</p>
<h3 id="heading-better-solution-index-only-what-you-need">🧠 Better Solution: Index Only What You <em>Need</em></h3>
<p>Instead, index just the <strong>date part</strong>:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">ON</span> sale ((date_trunc(<span class="hljs-string">'day'</span>, sold_at)));
</code></pre>
<p>This makes the index much smaller and faster, because PostgreSQL only needs to index dates, not full timestamps.</p>
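<p>To actually benefit from this index, a query has to repeat the indexed expression verbatim. A sketch using the table above:</p>
<pre><code class="lang-sql">-- Matches the indexed expression exactly, so the planner can
-- satisfy the per-day aggregation from the smaller index:
SELECT date_trunc('day', sold_at) AS sold_day, SUM(charged)
FROM sale
GROUP BY sold_day;
</code></pre>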
<p><strong>Why this matters:</strong></p>
<ul>
<li><p>Smaller index = less disk usage</p>
</li>
<li><p>Faster scans = quicker aggregations</p>
</li>
<li><p>Better performance without huge overhead</p>
</li>
</ul>
<p>💡 This is called a <em>function-based index</em>: a powerful tool junior developers often overlook.</p>
<hr />
<h2 id="heading-3-avoid-human-errors-with-virtual-generated-columns">🧪 3. Avoid Human Errors with Virtual Generated Columns</h2>
<p>Function-based indexes work great <em>if</em> programmers always use the exact same expression in queries. That rarely happens!</p>
<h3 id="heading-safeguard-with-virtual-generated-columns">🧠 Safeguard with Virtual Generated Columns</h3>
<p>PostgreSQL 18 lets you define a column that <strong>computes itself</strong>:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">ALTER</span> <span class="hljs-keyword">TABLE</span> sale
  <span class="hljs-keyword">ADD</span> sold_date <span class="hljs-built_in">DATE</span> <span class="hljs-keyword">GENERATED</span> <span class="hljs-keyword">ALWAYS</span> <span class="hljs-keyword">AS</span> (date_trunc(<span class="hljs-string">'day'</span>, sold_at));
</code></pre>
<p>Now every row exposes a computed date value that matches the index expression exactly. Later queries like this use the index automatically:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> sold_date, <span class="hljs-keyword">SUM</span>(charged)
<span class="hljs-keyword">FROM</span> sale
<span class="hljs-keyword">WHERE</span> sold_date <span class="hljs-keyword">BETWEEN</span> <span class="hljs-string">'2025-01-01'</span> <span class="hljs-keyword">AND</span> <span class="hljs-string">'2025-01-31'</span>
<span class="hljs-keyword">GROUP</span> <span class="hljs-keyword">BY</span> sold_date;
</code></pre>
<p>No mistakes. No confusing expressions. PostgreSQL <em>just uses the index</em>.</p>
<p><strong>Why this matters:</strong></p>
<ul>
<li><p>Less manual SQL discipline</p>
</li>
<li><p>Faster by design</p>
</li>
<li><p>Cleaner schemas</p>
</li>
</ul>
<hr />
<h2 id="heading-4-enforce-uniqueness-with-less-overhead-using-hash-indexes">🔐 4. Enforce Uniqueness with Less Overhead Using Hash Indexes</h2>
<p>Suppose you have millions of URLs and want to ensure you never process the same URL twice.</p>
<p>A naive way is to use a unique B-Tree index:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">UNIQUE</span> <span class="hljs-keyword">INDEX</span> urls_unique <span class="hljs-keyword">ON</span> urls(url);
</code></pre>
<p>That works, but B-Tree indexes can get big and slow if your values are long strings.</p>
<h3 id="heading-a-better-fit-unique-hash-index">🧠 A Better Fit: Unique Hash Index</h3>
<p>Hash indexes store a fingerprint of the value instead of the full string. For long or complex text values (like URLs), this can be:</p>
<ul>
<li><p><em>much smaller</em></p>
</li>
<li><p>slightly faster for equality checks</p>
</li>
</ul>
<p>Hash indexes aren’t used everywhere, but <strong>when you only need uniqueness, they can be perfect</strong>.</p>
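<p>One practical note: PostgreSQL doesn’t accept <code>CREATE UNIQUE INDEX ... USING hash</code> directly, so the usual way to get hash-backed uniqueness is an exclusion constraint over hash equality (a sketch; the constraint name is made up):</p>
<pre><code class="lang-sql">-- Enforces "no two rows with the same url" via the hash access
-- method, keeping the index small even for very long URL strings:
ALTER TABLE urls
  ADD CONSTRAINT urls_url_unique EXCLUDE USING hash (url WITH =);
</code></pre>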
<hr />
<h2 id="heading-5-think-like-the-planner-help-postgresql-know-what-you-really-care-about">🔍 5. Think Like the Planner: Help PostgreSQL Know What You <strong>Really</strong> Care About</h2>
<p>PostgreSQL’s optimizer makes choices based on statistics it has about data. Sometimes these stats are outdated, especially if:</p>
<ul>
<li><p>the table gets updated frequently</p>
</li>
<li><p>auto-ANALYZE doesn’t kick in quickly enough</p>
</li>
</ul>
<h3 id="heading-tip-fine-tune-auto-analyze">🧠 Tip: Fine-Tune Auto-ANALYZE</h3>
<p>By lowering thresholds for a specific table, PostgreSQL refreshes statistics faster so the planner stops guessing and starts knowing.</p>
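<p>The knobs for this are per-table storage parameters. A sketch (the numbers are assumptions to tune against your own workload; the global default only re-ANALYZEs after roughly 10% of rows change):</p>
<pre><code class="lang-sql">-- Re-ANALYZE the sale table after roughly 1% of rows change,
-- so the planner's row estimates stay fresh under heavy writes:
ALTER TABLE sale SET (
  autovacuum_analyze_scale_factor = 0.01,
  autovacuum_analyze_threshold = 500
);
</code></pre>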
<p>This won’t magically speed up every query, but in high-write environments it can prevent bad plans from becoming permanent performance problems.</p>
<hr />
<h2 id="heading-final-thoughts-for-junior-developers">🧠 Final Thoughts for Junior Developers</h2>
<p>Here’s what to take away:</p>
<p>✅ Don’t just index everything blindly<br />👉 Know why an index helps <em>specific queries</em><br />👉 Smaller indexes often outperform bigger ones<br />👉 Planner settings like <code>constraint_exclusion</code> can eliminate needless work</p>
<p>These “unconventional” ideas are unconventional <strong>because most developers don’t think about them</strong>, but they can make dramatic differences in real workloads.</p>
<hr />
<h2 id="heading-quick-cheat-sheet">🎓 Quick Cheat Sheet</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Optimisation</td><td>What It Does</td><td>When to Use It</td></tr>
</thead>
<tbody>
<tr>
<td><strong>constraint_exclusion</strong></td><td>Prevents pointless scans on impossible predicates</td><td>When queries include impossible lookups</td></tr>
<tr>
<td><strong>Function-based index</strong></td><td>Indexes a computed expression</td><td>When you filter or aggregate on computed values</td></tr>
<tr>
<td><strong>Virtual generated column</strong></td><td>Locks in correct expressions</td><td>When team SQL varies</td></tr>
<tr>
<td><strong>Hash unique index</strong></td><td>Smaller unique enforcement</td><td>When values are long/complex</td></tr>
</tbody>
</table>
</div><hr />
<p>If you want to explore this in depth, try using <code>EXPLAIN ANALYZE</code> on your queries. It’s the best way to <em>understand what PostgreSQL is actually doing</em> before and after your changes.</p>
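<p>A quick way to start (the query here is just an example):</p>
<pre><code class="lang-sql">-- Prints the chosen plan plus actual timings and row counts
EXPLAIN ANALYZE
SELECT * FROM urls WHERE url = 'https://example.com/';
</code></pre>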
<p>Source: <a target="_blank" href="https://hakibenita.com/postgresql-unconventional-optimizations">https://hakibenita.com/postgresql-unconventional-optimizations</a></p>
]]></content:encoded></item><item><title><![CDATA[Pop-Ups Are Back, Baby, And Browsers Don't Care]]></title><description><![CDATA[Remember when we defeated pop-up ads? Yeah, they're back. And this time, nobody's fighting them.
The Good Old Days (Of Terrible Ads)
Back around 2000, the internet was a warzone. Visit any website and BAM!!! random windows would explode onto your scr...]]></description><link>https://mahdix.com/pop-ups-are-back-baby-and-browsers-dont-care</link><guid isPermaLink="true">https://mahdix.com/pop-ups-are-back-baby-and-browsers-dont-care</guid><category><![CDATA[web]]></category><category><![CDATA[HTML5]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 26 Jan 2026 11:41:34 GMT</pubDate><content:encoded><![CDATA[<p><strong>Remember when we defeated pop-up ads? Yeah, they're back. And this time, nobody's fighting them.</strong></p>
<h2 id="heading-the-good-old-days-of-terrible-ads">The Good Old Days (Of Terrible Ads)</h2>
<p>Back around 2000, the internet was a warzone. Visit any website and BAM!!! random windows would explode onto your screen, covering everything with ads for stuff you never wanted. It was absolute chaos.</p>
<p>So what happened? Browser makers stepped up. Firefox made it a headline feature in 2004. Internet Explorer followed. News articles celebrated. Pop-up blockers became standard. We won!</p>
<p>...Or so we thought.</p>
<h2 id="heading-the-ads-evolved-the-browsers-didnt">The Ads Evolved. The Browsers Didn't.</h2>
<p>Here's the twist: <strong>Pop-ups never actually died. They just changed clothes.</strong></p>
<p>Those old pop-ups opened new browser windows. Easy to block. Today's pop-ups? They're built <em>inside</em> the webpage itself: floating boxes, full-screen takeovers, fake "sign up for our newsletter" modals that ambush you the moment you start reading.</p>
<p>Same annoying behaviour. Same tricks (tiny close buttons, misleading designs, popping up mid-scroll). But now browsers don't even try to stop them.</p>
<p>The ad industry adapted. Browser developers moved on to other things. And here we are, back to square one, except worse.</p>
<h2 id="heading-why-isnt-anyone-fixing-this">Why Isn't Anyone Fixing This?</h2>
<p>The original pop-up blockers weren't perfect either. There were edge cases, false positives, workarounds needed. But browser teams solved those problems because they wanted to make browsing better.</p>
<p>Today? Crickets. Firefox still has documentation about those 2004-era pop-up settings that nobody uses anymore because websites stopped using actual pop-up windows.</p>
<p>Meanwhile, every shopping site, news article, and blog hammers you with in-page pop-ups that browsers completely ignore.</p>
<h2 id="heading-the-fix-we-need-pop-up-blocking-20">The Fix We Need: Pop-Up Blocking 2.0</h2>
<p>The author's argument is simple: <strong>browsers need to start this fight again.</strong></p>
<p>Yes, it's technically harder. Yes, some website developers will complain. But here's the thing: they complained in 2004 too. We ignored them then, and websites adapted.</p>
<p>A browser that actually blocked these modern pop-ups? That would be headline news. People would switch browsers for that feature alone.</p>
<h2 id="heading-the-bottom-line">The Bottom Line</h2>
<p>Browsers decide how they show you websites. They're not just passive windows. They can choose to protect users from garbage experiences.</p>
<p>Twenty years ago, they chose to fight for us. Now it's time to do it again.</p>
<hr />
<p><em>The ball's in your court, Mozilla. Chrome. Safari. Want users to love you again? Build something worth talking about.</em></p>
<p>Source: <a target="_blank" href="https://www.smokingonabike.com/2025/12/31/web-browsers-have-stopped-blocking-pop-ups/">https://www.smokingonabike.com/2025/12/31/web-browsers-have-stopped-blocking-pop-ups/</a></p>
]]></content:encoded></item><item><title><![CDATA[Your Job Isn’t Just Writing Code. It’s Proving It Works]]></title><description><![CDATA[Software development isn’t about churning out lines of code and hoping they work. Anyone can generate code, even AI does that now. What separates the pros from the amateurs is this:
✅ Your job is this: Deliver code you have personally verified works
...]]></description><link>https://mahdix.com/your-job-isnt-just-writing-code-its-proving-it-works</link><guid isPermaLink="true">https://mahdix.com/your-job-isnt-just-writing-code-its-proving-it-works</guid><category><![CDATA[coding]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 19 Jan 2026 11:10:45 GMT</pubDate><content:encoded><![CDATA[<p>Software development isn’t about churning out lines of code and <em>hoping</em> they work. Anyone can generate code, even AI does that now. What <em>separates the pros from the amateurs</em> is this:</p>
<h2 id="heading-your-job-is-this-deliver-code-you-have-personally-verified-works">✅ Your job is this: <strong>Deliver code <em>you have personally verified works</em></strong></h2>
<p>No ifs, no maybes, no “let reviewers find the bugs.”</p>
<hr />
<h2 id="heading-the-problem-today">😬 The Problem Today</h2>
<p>We’ve all seen it: a huge AI-generated pull request lands with no evidence that it actually works. Someone else, usually a reviewer or maintainer, has to chase down bugs, figure out if it even <em>runs</em>, and fix it. That’s not teamwork, that’s dumping your mess on someone else.</p>
<p>That <em>used to</em> be forgivable. Today? There’s no excuse.</p>
<hr />
<h2 id="heading-real-software-engineering-proof">🧠 Real Software Engineering = Proof</h2>
<p>A real engineer doesn’t just <em>hope</em> code works. They <em>prove</em> it. There are <strong>two essential steps</strong> every time:</p>
<h3 id="heading-1-manual-testing-first">1️⃣ <strong>Manual testing first</strong></h3>
<p>You must see your code do the right thing yourself.</p>
<ul>
<li><p>Run it</p>
</li>
<li><p>Try it</p>
</li>
<li><p>Break it</p>
</li>
</ul>
<p>If you haven’t seen it actually work with your own eyes, it <em>might as well</em> be imaginary.</p>
<p>A smart trick: capture a terminal log or short screen recording that shows your change working and include that in your pull request.</p>
<h3 id="heading-2-then-write-an-automated-test">2️⃣ <strong>Then write an automated test</strong></h3>
<p>Tests are your safety net: they prove your change still works tomorrow, next month, and after someone else touches it.<br />Modern tools make this easier than ever. Skip this only if you <em>love</em> bugs.</p>
<p><strong>Manual testing + automated tests = Proof.</strong></p>
<hr />
<h2 id="heading-even-if-ai-wrote-the-code">🤖 Even If AI Wrote the Code</h2>
<p>Yes, modern coding assistants like Claude Code and Codex CLI can write and test code for you. But that doesn’t mean you can trust them blindly.</p>
<p>You still need to:</p>
<ul>
<li><p>Teach the tool to <em>show</em> it tested the code.</p>
</li>
<li><p>Understand the test results.</p>
</li>
<li><p>Add proper automated tests to your repo yourself.</p>
</li>
</ul>
<p>In other words: <strong>AI assists, you’re responsible.</strong></p>
<hr />
<h2 id="heading-why-this-matters">💡 Why This Matters</h2>
<p>Submitting unproven code:</p>
<ul>
<li><p>Wastes reviewers’ time</p>
</li>
<li><p>Introduces hidden bugs</p>
</li>
<li><p>Shifts the real work to someone else</p>
</li>
</ul>
<p>Proving your code works:</p>
<ul>
<li><p>Saves time</p>
</li>
<li><p>Builds trust</p>
</li>
<li><p>Makes you a stronger engineer</p>
</li>
</ul>
<p>It’s not optional. It’s the <strong>real definition of professional software development</strong> in an AI-augmented world.</p>
<hr />
<h2 id="heading-tldr-the-new-rules-of-quality-code">🔥 TL;DR: The New Rules of Quality Code</h2>
<p>✔ Run your code yourself<br />✔ Show it works<br />✔ Add automated tests<br />✔ Don’t submit anything unless it’s proven</p>
<p>Treat your evidence of working code like gold. Anyone can write lines; <em>you prove results.</em></p>
<p>Source: <a target="_blank" href="https://simonwillison.net/2025/Dec/18/code-proven-to-work/">https://simonwillison.net/2025/Dec/18/code-proven-to-work/</a></p>
]]></content:encoded></item><item><title><![CDATA[Why You Should Give HTMX a Shot 🚀]]></title><description><![CDATA[Ever feel trapped between building websites the "old school" way with plain HTML, or drowning in JavaScript framework complexity? There's a brilliant middle ground you might be missing.
The Problem We All Know
You've got two typical choices for build...]]></description><link>https://mahdix.com/why-you-should-give-htmx-a-shot</link><guid isPermaLink="true">https://mahdix.com/why-you-should-give-htmx-a-shot</guid><category><![CDATA[HTML5]]></category><category><![CDATA[htmx]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Tue, 13 Jan 2026 11:47:42 GMT</pubDate><content:encoded><![CDATA[<p>Ever feel trapped between building websites the "old school" way with plain HTML, or drowning in JavaScript framework complexity? There's a brilliant middle ground you might be missing.</p>
<h2 id="heading-the-problem-we-all-know">The Problem We All Know</h2>
<p>You've got two typical choices for building interactive websites:</p>
<p><strong>Plain HTML:</strong> Simple and reliable, but what happens when you need a button that updates <em>part</em> of a page without refreshing everything? Or a search box that shows results as you type?</p>
<p><strong>React/Vue/Angular</strong>: Sure, they work. But suddenly you're managing hundreds of dependencies, waiting ages for builds, and debugging why something called <code>useEffect</code> runs twice.</p>
<p>For most projects (dashboards, admin panels, forms, e-commerce sites), this feels like overkill.</p>
<h2 id="heading-enter-htmx-the-sweet-spot">Enter HTMX: The Sweet Spot</h2>
<p>HTMX is refreshingly simple. Here's what it does:</p>
<ul>
<li><p><strong>Any HTML element</strong> can make HTTP requests</p>
</li>
<li><p>Your server returns <strong>actual HTML</strong> (not JSON)</p>
</li>
<li><p>That HTML gets <strong>swapped into your page</strong> automatically</p>
</li>
<li><p>You write <strong>zero JavaScript</strong></p>
</li>
<li><p>The whole thing is <strong>just 14kb</strong></p>
</li>
</ul>
<p>That's it. Seriously.</p>
<p>Here's a working example:</p>
<pre><code class="lang-html"><span class="hljs-tag">&lt;<span class="hljs-name">button</span> <span class="hljs-attr">hx-post</span>=<span class="hljs-string">"/clicked"</span> <span class="hljs-attr">hx-swap</span>=<span class="hljs-string">"outerHTML"</span>&gt;</span>
    Click me
<span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
</code></pre>
<p>Click the button, HTMX sends a request, the server returns HTML, and it replaces the button. No complicated setup. No build tools. No npm nightmares.</p>
<h2 id="heading-real-results-from-real-teams">Real Results From Real Teams</h2>
<p>A company called Contexte rebuilt their React app using HTMX. The results?</p>
<ul>
<li><p><strong>67% less code</strong> (21,500 lines → 7,200 lines)</p>
</li>
<li><p><strong>96% fewer dependencies</strong> (255 packages → 9 packages)</p>
</li>
<li><p><strong>8x faster builds</strong> (40 seconds → 5 seconds)</p>
</li>
<li><p><strong>50-60% faster page loads</strong></p>
</li>
</ul>
<p>They deleted two-thirds of their codebase and the app got <em>better</em>.</p>
<h2 id="heading-quick-htmx-facts">Quick HTMX Facts</h2>
<ul>
<li><p><strong>Created by:</strong> Carson Gross in 2020 (evolved from intercooler.js)</p>
</li>
<li><p><strong>Philosophy:</strong> Hypermedia-driven applications, the way the web was designed</p>
</li>
<li><p><strong>Works with:</strong> Any backend language (Python, Ruby, Go, Java, PHP, you name it)</p>
</li>
<li><p><strong>Learning curve:</strong> About an afternoon to get productive</p>
</li>
<li><p><strong>Key attributes:</strong> <code>hx-get</code>, <code>hx-post</code>, <code>hx-swap</code>, <code>hx-trigger</code>, <code>hx-target</code></p>
</li>
</ul>
<h2 id="heading-when-htmx-shines">When HTMX Shines</h2>
<p>Perfect for:</p>
<ul>
<li><p>Admin dashboards</p>
</li>
<li><p>E-commerce sites</p>
</li>
<li><p>SaaS applications</p>
</li>
<li><p>Content-heavy websites</p>
</li>
<li><p>Internal tools</p>
</li>
<li><p>Any "forms and tables" application</p>
</li>
</ul>
<h2 id="heading-when-to-skip-it">When to Skip It</h2>
<p>Be honest: HTMX isn't ideal for:</p>
<ul>
<li><p>Real-time collaborative editing (think Google Docs)</p>
</li>
<li><p>Heavy client-side computation (video editors, CAD tools)</p>
</li>
<li><p>Offline-first apps</p>
</li>
<li><p>Genuinely complex UI state</p>
</li>
</ul>
<p>But here's the thing: most of us aren't building those. We're building apps that are <em>pretending</em> to need that complexity.</p>
<h2 id="heading-try-it-this-weekend">Try It This Weekend</h2>
<p>Here's the pitch: pick a side project. Add one script tag. Write one <code>hx-get</code> attribute. See what happens.</p>
<pre><code class="lang-html"><span class="hljs-tag">&lt;<span class="hljs-name">script</span> <span class="hljs-attr">src</span>=<span class="hljs-string">"https://unpkg.com/htmx.org@1.9.10"</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">script</span>&gt;</span>
</code></pre>
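<p>Then one attribute gives you your first dynamic element. This sketch assumes a <code>/news</code> endpoint on your server that returns an HTML fragment:</p>
<pre><code class="lang-html">&lt;!-- Clicking fetches GET /news and swaps the response into #feed --&gt;
&lt;button hx-get="/news" hx-target="#feed"&gt;Load news&lt;/button&gt;
&lt;div id="feed"&gt;&lt;/div&gt;
</code></pre>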
<p>If you hate it, you've lost a weekend. But chances are, you'll wonder why web development ever got so complicated in the first place.</p>
<p><strong>Resources:</strong></p>
<ul>
<li><p><a target="_blank" href="https://htmx.org">htmx.org</a>: official docs</p>
</li>
<li><p><a target="_blank" href="https://hypermedia.systems">hypermedia.systems</a>: free book on the approach</p>
</li>
</ul>
<p>The web was built on simplicity. Maybe it's time to get back to that.</p>
]]></content:encoded></item><item><title><![CDATA[The One File That Makes Claude 100x Smarter With Your Codebase]]></title><description><![CDATA[If you are using Claude Code or any AI agent inside your codebase, the CLAUDE.md file can change everything. A good one feels like giving Claude a proper first day at work. A messy one feels like hiring someone and letting them figure things out alon...]]></description><link>https://mahdix.com/the-one-file-that-makes-claude-100x-smarter-with-your-codebase</link><guid isPermaLink="true">https://mahdix.com/the-one-file-that-makes-claude-100x-smarter-with-your-codebase</guid><category><![CDATA[claude.ai]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[llm]]></category><category><![CDATA[vibe coding]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 29 Dec 2025 11:00:50 GMT</pubDate><content:encoded><![CDATA[<p>If you are using Claude Code or any AI agent inside your codebase, the <a target="_blank" href="http://CLAUDE.md"><code>CLAUDE.md</code></a> file can change everything. A good one feels like giving Claude a proper first day at work. A messy one feels like hiring someone and letting them figure things out alone.</p>
<p>This post will show you what to put inside your <code>CLAUDE.md</code>, what to avoid, and how to write it in a way that Claude can understand and follow easily.</p>
<hr />
<h2 id="heading-claude-starts-with-zero-knowledge">Claude starts with zero knowledge</h2>
<p>Large language models do not automatically know your repo. Every new session is basically an empty mind.<br />The <code>CLAUDE.md</code> file is where you introduce the AI to your project: what it is, how it works, how to build it, and where important things live.</p>
<p>Think of it like a short onboarding note for a new colleague.</p>
<hr />
<h2 id="heading-what-to-include">What to include</h2>
<p>Keep things simple and direct. If someone could read it in a couple of minutes and feel ready to contribute, you are on the right track.</p>
<p>A useful <code>CLAUDE.md</code> should cover:</p>
<ol>
<li><p>What the project is</p>
</li>
<li><p>Why it exists and what problem it solves</p>
</li>
<li><p>How to work with it (install, run, test, deploy)</p>
</li>
<li><p>How the folders and main parts of the repo are structured</p>
</li>
</ol>
<p>This is not documentation for users. It is guidance for a developer stepping into the codebase for the first time.</p>
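<p>A minimal skeleton covering those four points might look like this (every project detail below is a placeholder):</p>
<pre><code class="lang-markdown"># MyApp

Order-tracking REST API (FastAPI + PostgreSQL).

## Why it exists
Replaces the sales team's legacy spreadsheet workflow.

## Working with it
- Install: pip install -r requirements.txt
- Run locally: make dev
- Test: make test

## Layout
- api/    HTTP routes
- core/   business logic
- tests/  pytest suite
</code></pre>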
<hr />
<h2 id="heading-short-is-better-than-long">Short is better than long</h2>
<p>Claude does not read everything just because it is there. It tries to use what feels relevant. If you add too much detail, you increase the chance that important parts get lost inside the noise.</p>
<p>Keep it light. Trim anything that is not essential for understanding the project. A shorter file is easier for humans and AI to absorb. Many good <code>CLAUDE.md</code> files are far under 300 lines. Some teams keep theirs closer to one page and still get great results.</p>
<hr />
<h2 id="heading-put-extra-details-in-separate-docs">Put extra details in separate docs</h2>
<p>If your project is large, do not try to squeeze every piece of knowledge into one file. Create small, focused documents that Claude can read only when needed.</p>
<p>Example folder:</p>
<pre><code class="lang-plaintext">agent_docs/
  build.md
  tests.md
  architecture.md
  migrations.md
</code></pre>
<p>Then inside <code>CLAUDE.md</code> you can simply say:</p>
<pre><code class="lang-plaintext">If you need more detail, check the agent_docs files above.
</code></pre>
<p>This approach keeps the main file clean and gives Claude the option to pull more context when required.</p>
<hr />
<h2 id="heading-avoid-turning-claude-into-a-linter">Avoid turning Claude into a linter</h2>
<p>Code style rules, formatting choices, and long lists of "do this, don't do that" make <code>CLAUDE.md</code> heavy and unhelpful.<br />Let your linters, CI, and formatters handle style. Claude is better used for reasoning, planning, and writing code, not nitpicking style.</p>
<p>If you want Claude to fix formatting, just show it the linter output directly. That is faster and more reliable.</p>
<hr />
<h2 id="heading-write-it-yourself-rather-than-auto-generating-it">Write it yourself rather than auto generating it</h2>
<p>Yes, you could ask an AI to generate your <code>CLAUDE.md</code> for you. But since this file shapes how the agent interacts with your code forever, spending time to write it well is worth it.</p>
<p>A clear, human written document will reduce confusion later and help your workflow scale. You do not need it to be perfect on day one. You just need it to be useful and kept up to date.</p>
<hr />
<h2 id="heading-quick-checklist">Quick checklist</h2>
<p>A strong <code>CLAUDE.md</code>:</p>
<ul>
<li><p>is short, friendly and easy to skim</p>
</li>
<li><p>explains the project goal and structure in simple language</p>
</li>
<li><p>describes how to run, build and test the app</p>
</li>
<li><p>contains links to deeper files instead of dumping everything inside</p>
</li>
<li><p>is updated occasionally rather than forgotten</p>
</li>
</ul>
<hr />
<h2 id="heading-final-thoughts">Final thoughts</h2>
<p><code>CLAUDE.md</code> is not just documentation. It is how you teach an AI to think inside your codebase. If you treat it like onboarding instead of a rulebook, you will get faster and more reliable help from Claude every time you open a session.</p>
<p>If you want, tell me about your tech stack and what your app does. I can help you draft a clean <code>CLAUDE.md</code> that fits your project perfectly.</p>
]]></content:encoded></item><item><title><![CDATA[Should Your AI Agent Pay Taxes? The Trillion Dollar Question Nobody's Asking]]></title><description><![CDATA[The robots are coming for our jobs. But here’s the twist: they’re also coming for our tax base.
Amazon, Meta, UPS, Big Tech is raking in record profits while quietly replacing humans with algorithms. Great for shareholders. Terrifying for government ...]]></description><link>https://mahdix.com/should-your-ai-agent-pay-taxes-the-trillion-dollar-question-nobodys-asking</link><guid isPermaLink="true">https://mahdix.com/should-your-ai-agent-pay-taxes-the-trillion-dollar-question-nobodys-asking</guid><category><![CDATA[AI]]></category><category><![CDATA[agi]]></category><category><![CDATA[ai agents]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 15 Dec 2025 16:43:05 GMT</pubDate><content:encoded><![CDATA[<p><strong>The robots are coming for our jobs. But here’s the twist: they’re also coming for our tax base.</strong></p>
<p>Amazon, Meta, UPS: Big Tech is raking in record profits while quietly replacing humans with algorithms. Great for shareholders. Terrifying for government budgets.</p>
<p>Here’s the math that should keep politicians up at night: <strong>over 80% of US federal revenue comes from taxing working humans</strong>: income tax, payroll tax, Social Security, Medicare. When those humans get replaced by software? That money vanishes.</p>
<h2 id="heading-the-bill-gates-bombshell">The Bill Gates Bombshell</h2>
<p>Back in 2017, Bill Gates dropped a radical idea: <strong>make robots pay taxes.</strong></p>
<p>Not literally (R2-D2 won’t be filing returns anytime soon). But companies that automate away human jobs? They should pay what those workers would have contributed. Nobel laureate Edmund Phelps backs him up.</p>
<p>The logic is brutally simple. Fire 100 customer service reps, deploy a chatbot instead? Pay the equivalent of their tax burden. Call it a "robot tax" or an "AI levy"; either way, it forces companies to share the gains from automation.</p>
<h2 id="heading-two-camps-one-problem">Two Camps, One Problem</h2>
<p><strong>Team Robot Tax</strong> says we need a financial buffer. The Industrial Revolution was chaos; displaced workers, riots, decades of suffering before new equilibria emerged. We can be smarter this time. Tax automation, fund retraining, cushion the blow.</p>
<p><strong>Team Free Market</strong> (including Brookings researcher Sanjay Patnaik) argues a targeted AI tax is a bureaucratic nightmare. How do you define "AI"? How do you measure job displacement? Better solution: just raise capital gains taxes on the companies benefiting from automation.</p>
<h2 id="heading-why-this-matters-now">Why This Matters NOW</h2>
<p>The UK is already debating this. MP Neil Duncan-Jordan is pushing for companies using AI to cut jobs to face special taxes. His pitch: this isn’t about someone using ChatGPT to plan a meeting. It’s about corporations replacing entire departments.</p>
<p>And the IMF? They’re warning that this isn’t some distant future problem, it’s happening today.</p>
<h2 id="heading-the-bottom-line">The Bottom Line</h2>
<p>We're watching the biggest shift in how work happens since the Industrial Revolution. The question isn’t whether AI will transform the economy, it’s whether we'll have any tax revenue left when it does.</p>
<p>The robots are getting smarter. It’s time our tax policy did too.</p>
<hr />
<p><em>What do you think? Should AI pay its fair share? Drop your take in the comments.</em></p>
]]></content:encoded></item><item><title><![CDATA[The Shape That Changed Everything: What Is a Manifold?]]></title><description><![CDATA[Imagine standing in a field. The ground looks flat. But in reality, you're on a giant sphere flying through space at 67,000 miles per hour.
That simple idea that something can look flat up close even if it's curved overall led to one of the most impo...]]></description><link>https://mahdix.com/the-shape-that-changed-everything-what-is-a-manifold</link><guid isPermaLink="true">https://mahdix.com/the-shape-that-changed-everything-what-is-a-manifold</guid><category><![CDATA[Mathematics]]></category><category><![CDATA[Manifold]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 08 Dec 2025 11:56:49 GMT</pubDate><content:encoded><![CDATA[<p><strong>Imagine standing in a field. The ground looks flat. But in reality, you're on a giant sphere flying through space at 67,000 miles per hour.</strong></p>
<p>That simple idea, that something can <em>look</em> flat up close even if it's curved overall, led to one of the most important concepts in modern math: the <strong>manifold</strong>.</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/2/2d/BoysSurfaceTopView.PNG" alt /></p>
<h2 id="heading-the-quiet-genius-behind-the-idea">The Quiet Genius Behind the Idea</h2>
<p>In 1854, a shy mathematician named <strong>Bernhard Riemann</strong> gave a lecture in Germany that almost didn’t happen. He was terrified of public speaking. He had even planned to become a pastor like his father.</p>
<p>But that day, he introduced a new way to think about shapes and space. Riemann suggested that you could study complicated spaces by zooming in on small pieces that look flat and familiar. At first, people didn’t pay much attention. Decades later, his idea became the foundation of modern physics.</p>
<h2 id="heading-so-what-is-a-manifold">So… What Is a Manifold?</h2>
<p>A <strong>manifold</strong> is any shape that looks flat if you zoom in close enough.</p>
<ul>
<li><p>Walk along a <strong>circle</strong> as an ant. Up close, it feels like a straight line. That’s a <em>one-dimensional manifold</em>.</p>
</li>
<li><p>Walk across the <strong>Earth</strong>. It feels flat, even though it’s curved. That’s a <em>two-dimensional manifold</em>.</p>
</li>
</ul>
<p>But not everything counts. A <strong>figure-8</strong> doesn’t work because the crossing point is messy. No matter how much you zoom in, it doesn’t look like a simple line. An ant would know something strange is happening.</p>
<h2 id="heading-why-does-this-matter">Why Does This Matter?</h2>
<p>Manifolds show up everywhere in science and technology.</p>
<ul>
<li><p><strong>Einstein</strong> used Riemann’s ideas to describe <strong>spacetime</strong>, a four-dimensional manifold. Gravity, in his view, is just curvature.</p>
</li>
<li><p><strong>Engineers</strong> use manifolds to model complex machines.</p>
</li>
<li><p><strong>Data scientists</strong> use them to find hidden structure in huge datasets.</p>
</li>
<li><p><strong>Roboticists</strong> use them to plan smooth, safe movements.</p>
</li>
</ul>
<p>As one mathematician said: asking how scientists use manifolds is like asking how they use numbers; they’re that fundamental.</p>
<h2 id="heading-the-clever-part">The Clever Part</h2>
<p>The true power of manifolds is that they turn hard problems into easy ones.</p>
<p>Because every small patch looks flat, you can use simple math on each patch, then piece everything together, just like using many small paper maps to represent the whole Earth. Each map is a “chart,” and all of them together form an “atlas.”</p>
<hr />
<p><strong>From a nervous lecture in 1854 to the very shape of the universe, manifolds show how a simple observation can unlock an entire world of ideas.</strong></p>
<p>Sometimes the ground <em>does</em> look flat. You’re just seeing a tiny part of something much bigger.</p>
]]></content:encoded></item><item><title><![CDATA[The Forgotten Art of Immutability: Why Your Variables Should Stay Put]]></title><description><![CDATA[Confession time: Python turned me into a slob. After years of writing strict, spotless C++ where const guarded everything like a loyal watchdog, I fell into Python’s cozy world and started treating variables like sticky Post-it notes: reuse them, scr...]]></description><link>https://mahdix.com/the-forgotten-art-of-immutability-why-your-variables-should-stay-put</link><guid isPermaLink="true">https://mahdix.com/the-forgotten-art-of-immutability-why-your-variables-should-stay-put</guid><category><![CDATA[coding]]></category><category><![CDATA[immutable]]></category><category><![CDATA[Functional Programming]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 24 Nov 2025 13:44:34 GMT</pubDate><content:encoded><![CDATA[<p>Confession time: Python turned me into a slob. After years of writing strict, spotless C++ where const guarded everything like a loyal watchdog, I fell into Python’s cozy world and started treating variables like sticky Post-it notes: reuse them, scribble over them, hope for the best. Then one day I realised I was casually overwriting variables all over my functions. My past self would have screamed. That was my wake-up call: immutability matters, <em>a lot</em>.</p>
<h2 id="heading-the-single-assignment-principle">The Single Assignment Principle</h2>
<p>There’s one tiny rule that can massively boost your code quality: give a variable one value, once, and then leave it alone. Unless you’re doing actual iterative math in a loop, treat variables as write-once, read-many treasures. This isn’t some ivory-tower purity test, it's survival wisdom forged in the fires of 2 AM debugging sessions.</p>
<p>Picture this: you're stepping through a debugger, and every calculation you ever made is still there, neatly preserved. You can follow the trail from <code>userAge</code> to <code>adjustedAge</code> to <code>finalAge</code> like breadcrumbs through the forest. Overwrite those variables, and poof, the trail vanishes. Now you’re staring at a single variable that’s been shapeshifting like a superhero.</p>
<p>The real nightmare hits when you refactor. You extract a chunk of code, move it somewhere else, and suddenly that reused variable name either doesn’t exist anymore or means something totally different. The code breaks silently, and you spend hours hunting a ghost bug that never would’ve existed if you’d created <code>userAge2</code> instead of bulldozing <code>userAge</code>.</p>
<h2 id="heading-the-language-landscape">The Language Landscape</h2>
<p>Languages all treat immutability differently. C and C++ give you <code>const</code>, which might honestly be one of the most powerful keywords ever invented. Marking almost every variable <code>const</code> should be the default. In fact, I wish the languages had flipped the rules: everything immutable unless you explicitly declare it mutable. Imagine the bugs that would never be born.</p>
<p>Functional languages like Haskell and Erlang go all-in: variables never change, full stop. It sounds harsh, but it wipes out entire classes of state-related bugs before they can bite. Rust takes a similar stance (immutability first), forcing you to mark variables as <code>mut</code> when you really mean it.</p>
<p>JavaScript tried to clean up its act by adding <code>const</code> and <code>let</code> to replace the chaotic <code>var</code>, though <code>const</code> still lets you mutate objects under the hood. Scala and Kotlin offer <code>val</code> (immutable) and <code>var</code> (mutable), gently steering developers toward safer patterns without yelling about it.</p>
<p>Then we have Python, beautiful, flexible, dangerously forgiving. It hands you a flaming torch and says, "Try not to burn down the house." No built-in immutable variables, just conventions and discipline. Turns out I needed to relearn both.</p>
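<p>To be fair, Python does hand you a couple of fire extinguishers if you go looking. A small sketch of the conventional tools, one checked by type checkers and one enforced at runtime:</p>

```python
from dataclasses import dataclass, FrozenInstanceError
from typing import Final

# `Final` is a promise to the type checker; Python itself won't stop you,
# but mypy or pyright will flag any reassignment.
MAX_RETRIES: Final = 3

# Frozen dataclasses enforce immutability at runtime.
@dataclass(frozen=True)
class User:
    name: str
    age: int

user = User("Ada", 36)
try:
    user.age = 37            # refused: raises FrozenInstanceError
except FrozenInstanceError:
    blocked = True
```
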
<h2 id="heading-the-bottom-line">The Bottom Line</h2>
<p>Immutability isn’t about being a zealot, it’s about being kind to future you. When every variable has a clear purpose and never secretly changes, your code becomes simpler to read, easier to debug, and far safer to refactor. Some languages enforce this for you, others leave you on your own, but the lesson is universal: variables that stay put lead to programs that stay solid.</p>
]]></content:encoded></item><item><title><![CDATA[How to Avoid "Vibe Coding Hell"]]></title><description><![CDATA[The Old Problem: Tutorial Hell
A few years ago, new coders faced "tutorial hell", watching endless YouTube tutorials, copying every line perfectly, then freezing when trying to build something new, alone.
The solution was simple: write more code, wat...]]></description><link>https://mahdix.com/how-to-avoid-vibe-coding-hell</link><guid isPermaLink="true">https://mahdix.com/how-to-avoid-vibe-coding-hell</guid><category><![CDATA[vibe coding]]></category><category><![CDATA[llm]]></category><category><![CDATA[Junior developer ]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 10 Nov 2025 17:35:11 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-the-old-problem-tutorial-hell">The Old Problem: Tutorial Hell</h2>
<p>A few years ago, new coders faced <strong>"tutorial hell"</strong>: watching endless YouTube tutorials, copying every line perfectly, then freezing when trying to build something new, alone.</p>
<p>The solution was simple: <strong>write more code, watch fewer videos</strong>. Learn by doing, not by following along.</p>
<p>But today, we face a new problem.</p>
<h2 id="heading-the-new-problem-vibe-coding-hell">The New Problem: Vibe Coding Hell</h2>
<p>AI coding assistants like ChatGPT and GitHub Copilot are everywhere. They're powerful tools, but they've created a new trap.</p>
<p><strong>What is "vibe coding hell"?</strong></p>
<p>Building projects quickly with AI help, but not understanding the code you're writing.</p>
<p><strong>The symptoms:</strong></p>
<ul>
<li><p>"I can't build anything without my AI assistant"</p>
</li>
<li><p>"My project works, but I have no idea how"</p>
</li>
<li><p>Feeling lost when AI makes mistakes</p>
</li>
</ul>
<h2 id="heading-why-this-matters">Why This Matters</h2>
<p>A 2025 study found developers using AI <em>thought</em> they were 25% faster, but were actually <strong>19% slower</strong>. When AI does the thinking, your brain doesn't do the work.</p>
<p>Worse, it creates a dangerous mindset: "Why learn this when AI already knows it?" This leads to developers who can't solve problems without AI assistance.</p>
<h2 id="heading-two-problems-with-learning-from-ai">Two Problems with Learning from AI</h2>
<p><strong>Problem 1: AI Agrees with Everything</strong></p>
<p>AI rarely challenges you or points out flaws. But real learning requires <strong>friction</strong>: that uncomfortable struggle where growth happens.</p>
<p><strong>Problem 2: AI Gives Wishy-Washy Answers</strong></p>
<p>Responses like "Some prefer X, others prefer Y" don't help beginners. You need clear direction and strong opinions when learning.</p>
<h2 id="heading-how-to-escape-vibe-coding-hell">How to Escape Vibe Coding Hell</h2>
<h3 id="heading-avoid-these-habits">❌ Avoid These Habits</h3>
<ul>
<li><p>Letting AI write entire code blocks</p>
</li>
<li><p>Using AI agents to complete learning projects</p>
</li>
<li><p>Copy-pasting without understanding</p>
</li>
</ul>
<h3 id="heading-use-ai-this-way">✅ Use AI This Way</h3>
<ul>
<li><p><strong>For explanations, not solutions</strong>: "Explain how this works" not "Write the code"</p>
</li>
<li><p><strong>For debugging help</strong>: Show your code and ask what's wrong</p>
</li>
<li><p><strong>With challenging prompts</strong>: "Ask me three questions before answering"</p>
</li>
</ul>
<h3 id="heading-the-golden-rule-embrace-the-struggle">🔥 The Golden Rule: Embrace the Struggle</h3>
<p>Real learning feels uncomfortable. When you're stuck and frustrated, that's when your brain grows.</p>
<p><strong>Remember:</strong> Only coding yourself teaches you to code.</p>
<h2 id="heading-your-action-plan">Your Action Plan</h2>
<ol>
<li><p><strong>Type every line yourself</strong>: Even if AI suggests it</p>
</li>
<li><p><strong>Break things on purpose</strong>: Then fix them</p>
</li>
<li><p><strong>Ask "why"</strong>: Understand every line of code</p>
</li>
<li><p><strong>Build without AI</strong>: Try one project completely alone</p>
</li>
<li><p><strong>Use AI as a tutor, not a crutch</strong>: Ask questions, don't request solutions</p>
</li>
</ol>
<p>Real developers aren't the ones who can prompt AI best. They're the ones who understand how their code actually works.</p>
]]></content:encoded></item><item><title><![CDATA[Why I Can't Stop Talking About Claude Code]]></title><description><![CDATA[If we’ve talked lately, you’ve probably heard me rave about Claude Code. What started as just another coding tool has basically turned into my second brain.
I take all my notes in Bear (it’s like Notion, but simpler). A few months ago, I realised som...]]></description><link>https://mahdix.com/why-i-cant-stop-talking-about-claude-code</link><guid isPermaLink="true">https://mahdix.com/why-i-cant-stop-talking-about-claude-code</guid><category><![CDATA[claude-code]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[llm]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Wed, 05 Nov 2025 10:39:35 GMT</pubDate><content:encoded><![CDATA[<p>If we’ve talked lately, you’ve probably heard me rave about <strong>Claude Code</strong>. What started as just another coding tool has basically turned into my second brain.</p>
<p>I take all my notes in <strong>Bear</strong> (it’s like Notion, but simpler). A few months ago, I realised something big: since my notes are just text, why not let Claude Code work with them?</p>
<p>Now I’ve got Claude helping me take notes, organise research, and think through ideas. I even set up a little home server so I can access it from my phone.</p>
<h3 id="heading-what-makes-claude-code-so-good">What Makes Claude Code So Good</h3>
<p>People often ask why I’m so obsessed with it. Here’s why it stands out:</p>
<p><strong>1. It speaks “Unix.”</strong><br />Unix is the language of the command line: those tiny, powerful tools developers use to make computers do things. Claude Code knows how to use them really well. While most AI tools get lost in complicated tasks, Claude just connects simple commands and gets the job done fast.</p>
<p><strong>2. It actually remembers stuff.</strong><br />This is huge. Most AI chat tools (like ChatGPT or Claude in the browser) forget everything after each chat. Claude Code doesn’t. It can save files, read them later, and build on past work.</p>
<p>Think of it like this: regular AI tools are geniuses with short-term memory loss. Claude Code is a smart coworker who takes notes and remembers what you did yesterday.</p>
<h3 id="heading-the-big-idea">The Big Idea</h3>
<p>Claude Code shows that sometimes, it’s not about making AI <em>smarter</em>; it’s about giving it better tools. By letting it use a filesystem and basic Unix commands, you unlock a whole new level of ability that’s been hiding there all along.</p>
<p>We’re just getting started with what’s possible.</p>
]]></content:encoded></item><item><title><![CDATA[The Shocking Truth: How 3% of Social Media Users Control What Everyone Thinks]]></title><description><![CDATA[You've heard social media causes problems. You've seen confusing studies. But researchers have been looking at the wrong thing the whole time.
A Tiny Group Controls Everything
While experts argue about whether social media divides people, something b...]]></description><link>https://mahdix.com/the-shocking-truth-how-3-of-social-media-users-control-what-everyone-thinks</link><guid isPermaLink="true">https://mahdix.com/the-shocking-truth-how-3-of-social-media-users-control-what-everyone-thinks</guid><category><![CDATA[social media]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 13 Oct 2025 10:16:04 GMT</pubDate><content:encoded><![CDATA[<p>You've heard social media causes problems. You've seen confusing studies. But researchers have been looking at the wrong thing the whole time.</p>
<h2 id="heading-a-tiny-group-controls-everything">A Tiny Group Controls Everything</h2>
<p>While experts argue about whether social media divides people, something bigger is happening: a tiny group of users has figured out how to control what goes viral and change how we all think.</p>
<p>The numbers are crazy. Only <strong>3% of accounts create one third of all posts</strong>. Just <strong>10% of users write 97% of posts about current events</strong>. These aren't normal users; they're influencers and content creators who discovered the secret to going viral: making people angry.</p>
<p>Studies show that every angry or emotional word in a post makes it 20% more likely to spread. Posts about anger and disgust spread the fastest. Why? Our brains are wired to pay attention to danger, and these creators have learned to use this against us.</p>
<h2 id="heading-you-cant-escape-it">You Can't Escape It</h2>
<p>The scary part is, even if you don't use social media, it still affects you. Studies found that older people who barely use social media changed their behaviour the most. How?</p>
<p>Your friends who use social media influence you. TV news repeats viral stories. Everyone talks about what's trending online. The effects spread through society like a virus.</p>
<p>When researchers paid people to quit social media for six weeks, it barely changed their views. Why? Everyone around them was still using it, still influenced by it, still talking about what they saw online.</p>
<h2 id="heading-how-anger-goes-viral">How Anger Goes Viral</h2>
<p>Studies in Germany, Italy, and Russia all show the same thing: when more people in an area use social media, extreme things happen more often: more protests, more hate crimes, more people voting for extreme candidates.</p>
<p>One study found that normal increases in social media use matched up with 32% more hate crimes. Another showed that 10% more users in an area made protests almost 5% more likely.</p>
<h2 id="heading-the-fake-reality-problem">The Fake Reality Problem</h2>
<p>Social media creates what researchers call a "fake shared understanding." We think people around us are way more extreme than they really are. Surveys show most people are pretty normal and don't care much about politics, but social media makes the angry voices louder, so it seems like everyone is outraged.</p>
<p>The dangerous loop: when we think others are extreme, we become extreme too. Studies show people who see angry comments start writing angry comments themselves. Users copy the anger they see in their feeds. We're teaching each other to be angrier.</p>
<h2 id="heading-why-experts-got-it-wrong">Why Experts Got It Wrong</h2>
<p>Old research looked at whether different groups hate each other more. But social media's real damage is bigger: it makes everyone angrier, more scared, and more tribal, no matter what they believe.</p>
<p>The system is designed this way: attention equals money. Content creators get paid based on how much people interact with their posts. And nothing gets more clicks than fear, anger, and outrage. We built a system that pays people to make us feel terrible.</p>
<h2 id="heading-the-bottom-line">The Bottom Line</h2>
<p>Social media isn't just showing us reality: it's warping reality while enriching a tiny group of people who've mastered fake outrage.</p>
<p>Studies from many countries agree. A small group of people who post constantly, chasing money and attention, is changing how we see each other and how we act.</p>
<p>And this is just the beginning.</p>
]]></content:encoded></item><item><title><![CDATA[The One Management Skill That Changes Everything]]></title><description><![CDATA[You’re going to mess up as a manager.
You’ll say the wrong thing.Make a bad call.Lose your cool.Forget a promise.
That’s not failure; it’s human.
The real question is: what do you do after you mess up?
👶 The Lesson That Changed Everything for Me
I r...]]></description><link>https://mahdix.com/the-one-management-skill-that-changes-everything</link><guid isPermaLink="true">https://mahdix.com/the-one-management-skill-that-changes-everything</guid><category><![CDATA[management]]></category><category><![CDATA[teamwork]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Tue, 07 Oct 2025 08:57:27 GMT</pubDate><content:encoded><![CDATA[<p>You’re going to mess up as a manager.</p>
<p>You’ll say the wrong thing.<br />Make a bad call.<br />Lose your cool.<br />Forget a promise.</p>
<p>That’s not failure; it’s <em>human</em>.</p>
<p>The real question is: <strong>what do you do after you mess up?</strong></p>
<p>👶 <strong>The Lesson That Changed Everything for Me</strong></p>
<p>I read a parenting book called <em>Good Inside</em>.<br />One idea blew my mind:</p>
<p>Being a great parent isn’t about being perfect<br />it’s about <strong>repair</strong>.</p>
<p>You mess up → you go back → you make it right.</p>
<p>That’s exactly what great managers do.</p>
<p>Your worst boss wasn’t the one who made mistakes.<br />It was the one who <em>never admitted them.</em></p>
<p>⚖️ <strong>The Fork in the Road</strong></p>
<p>You promise a tight deadline without asking your team.<br />They grind.<br />Stay late.<br />Burn out.</p>
<p>Now you’ve got two choices:</p>
<p>🚫 <strong>Bad manager:</strong> Pretends nothing happened. Trust fades.<br />✅ <strong>Good manager:</strong> Says,</p>
<blockquote>
<p>“I messed up. I should’ve checked with you first. I won’t do that again.”</p>
</blockquote>
<p>That simple honesty <em>builds</em> trust.</p>
<p>🔧 <strong>How to Repair</strong></p>
<p>1️⃣ <strong>Be specific.</strong><br />Don’t say “sorry! things got messy.” Say what you actually did.</p>
<p>2️⃣ <strong>Own the impact.</strong><br />No excuses. Focus on how it affected them.</p>
<p>3️⃣ <strong>Show change.</strong><br />A repeated mistake isn’t a mistake anymore.</p>
<p>4️⃣ <strong>Be patient.</strong><br />One apology helps, consistency heals.</p>
<p>🔥 <strong>Why It Matters</strong></p>
<p>When you know you can fix mistakes, you stop fearing them.</p>
<p>You move faster.<br />Lead braver.<br />Show up more human and not a robot.</p>
<p>Management isn’t about perfection.<br />It’s about <em>trust, repair, and growth.</em></p>
<blockquote>
<p>You’ll get it wrong.<br />You’ll fix it.<br />You’ll get better.</p>
</blockquote>
<p>That’s the skill that changes everything. 💙</p>
]]></content:encoded></item><item><title><![CDATA[Your Computer Is Lying to You]]></title><description><![CDATA[The CPU Utilization Scam They Don’t Tell You About
You’ve seen it: that smug little number in Task Manager or Activity Monitor. “Don’t worry, I’m only at 50% CPU utilization!”
Yeah… about that. It’s lying to your face.
The Experiment That Blew the Li...]]></description><link>https://mahdix.com/your-computer-is-lying-to-you</link><guid isPermaLink="true">https://mahdix.com/your-computer-is-lying-to-you</guid><category><![CDATA[cpu]]></category><category><![CDATA[Computer Science]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 29 Sep 2025 13:45:24 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-the-cpu-utilization-scam-they-dont-tell-you-about">The CPU Utilization Scam They Don’t Tell You About</h2>
<p>You’ve seen it: that smug little number in Task Manager or Activity Monitor. <em>“Don’t worry, I’m only at 50% CPU utilization!”</em></p>
<p>Yeah… about that. <strong>It’s lying to your face.</strong></p>
<h2 id="heading-the-experiment-that-blew-the-lid-off">The Experiment That Blew the Lid Off</h2>
<p>One curious engineer decided to put their AMD Ryzen through the digital equivalent of Navy SEAL training using a tool called <code>stress-ng</code> (translation: “let’s break the CPU and see what happens”).</p>
<p>They wanted to know: if the CPU meter says 50%, does that really mean the chip is working at half its capacity?</p>
<p>Short answer: <strong>nope.</strong><br />Long answer: <strong>your CPU is gaslighting you.</strong></p>
<h2 id="heading-the-numbers-that-dont-add-up">The Numbers That Don’t Add Up</h2>
<p>Here’s what’s really going on when your CPU swears it’s “half busy”:</p>
<ul>
<li><p><strong>Regular tasks</strong>: already chewing through ~60 to 65% of what the chip can handle.</p>
</li>
<li><p><strong>Math with integers</strong>: jumps to ~65 to 85%.</p>
</li>
<li><p><strong>Matrix math (heavy-duty number crunching)</strong>: up to 100%.</p>
</li>
</ul>
<p>So yeah, your computer can claim it’s “only halfway there” while secretly sweating like a student pulling an all-nighter before finals.</p>
<h2 id="heading-the-dirty-tricks-behind-the-lie">The Dirty Tricks Behind the Lie</h2>
<p>Two big reasons your CPU meter is about as trustworthy as a toddler with chocolate on their face:</p>
<p><strong>1. Hyper-threading: The Fake Roommate</strong><br />Your CPU pretends it has more cores than it really does by splitting each physical core into two “threads.” Imagine a building with 12 bathrooms. Great! Then the landlord announces, <em>“Surprise, here are 12 more bathrooms!”</em> Except the “new” ones are just the original 12 with two people sharing each. When workloads pile up, half those “extra” cores are mostly waiting in line.</p>
<p><strong>2. Turbo Mode: The Bait and Switch</strong><br />When only a few cores are active, they can sprint at full turbo speed (say, 4.9 GHz). Fire up all the cores, and suddenly everyone slows down to 4.3 GHz to avoid overheating. That means your CPU meter is counting cycles, but the cycles themselves keep shrinking in value. It’s like measuring distance while someone keeps changing the length of your ruler.</p>
<h2 id="heading-how-to-outsmart-the-lie">How to Outsmart the Lie</h2>
<p>Stop blindly trusting CPU percentages. Instead:</p>
<p>✅ <strong>Benchmark your system</strong> — test how much work it can <em>really</em> do under load.<br />✅ <strong>Measure completed work</strong>, not just utilization.<br />✅ <strong>Compare against the real max</strong>, not the fairy tale number your CPU reports.</p>
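<p>One way to do that is to measure throughput directly. A minimal sketch (the workload and numbers are illustrative): time a fixed CPU-bound task at different process counts and watch where adding workers stops adding completed work.</p>

```python
import multiprocessing as mp
import time

def busy_work(n: int) -> int:
    # Purely CPU-bound: sum of squares, just to burn cycles
    total = 0
    for i in range(n):
        total += i * i
    return total

def tasks_per_second(workers: int, n: int = 200_000, batches: int = 4) -> float:
    # Measure completed work, not utilization: how many identical
    # tasks actually finish per second with this many processes?
    jobs = [n] * (workers * batches)
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(busy_work, jobs)
    return len(jobs) / (time.perf_counter() - start)

if __name__ == "__main__":
    # If doubling workers stops doubling throughput long before the
    # meter reads 100%, the utilization number was lying to you.
    for w in (1, 2, 4, 8):
        print(f"{w:2d} workers: {tasks_per_second(w):7.1f} tasks/sec")
```

<p>Run it on your own machine: the point at which the tasks/sec curve flattens is your <em>real</em> ceiling, whatever the CPU meter claims.</p>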
<h2 id="heading-tldr">TL;DR</h2>
<p>CPU utilization numbers are about as honest as someone saying <em>“I’m fine”</em> when they’re clearly not.</p>
<p>If you trust that 50% figure, you’ll end up underestimating how close you are to the edge. Measure <strong>real performance</strong>, not just utilisation, and your future self will thank you.</p>
]]></content:encoded></item><item><title><![CDATA[From Stone Tools to TikTok: Explore the Historical Tech Tree]]></title><description><![CDATA[Ever wondered how we went from smashing rocks together to doomscrolling on TikTok? Now you don’t just have to wonder. That’s because Étienne Fortier-Dubois has built something jaw-dropping: The Historical Tech Tree.
This isn’t your dusty high school ...]]></description><link>https://mahdix.com/from-stone-tools-to-tiktok-explore-the-historical-tech-tree</link><guid isPermaLink="true">https://mahdix.com/from-stone-tools-to-tiktok-explore-the-historical-tech-tree</guid><category><![CDATA[technology]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 22 Sep 2025 08:35:00 GMT</pubDate><content:encoded><![CDATA[<p>Ever wondered how we went from smashing rocks together to doomscrolling on TikTok? Now you don’t just have to wonder. That’s because Étienne Fortier-Dubois has built something jaw-dropping: <a target="_blank" href="https://www.historicaltechtree.com/"><strong>The Historical Tech Tree</strong></a>.</p>
<p>This isn’t your dusty high school timeline of inventions. It’s a massive, living map of <strong>1,780 technologies</strong> spanning <strong>3.3 million years</strong>, all stitched together with over <strong>2,000 connections</strong>. Imagine seeing the wheel evolve into cars, or ancient glass-making techniques laying the groundwork for your smartphone screen. It’s not a list; it’s a web. A story. A chain reaction of human creativity.</p>
<p>What makes it even cooler? He is building it in public. He’s tweaking, refining, and expanding it based on feedback. Every time you check in, the map has grown a little bit more, like technology itself.</p>
<p>The best part is how approachable it feels. Instead of staring at today’s mind-bending tech and thinking, <em>“How on earth did we get here?”</em>, you can trace the path step by step. It’s history, but alive, and it reminds you that every breakthrough, no matter how tiny, can spark the next revolution.</p>
<p>Head to <a target="_blank" href="http://historicaltechtree.com"><strong>historicaltechtree.com</strong></a> to see this for yourself, but once you dive in, you may not resurface for hours.</p>
]]></content:encoded></item><item><title><![CDATA[Why AI Still Can’t Build Software]]></title><description><![CDATA[As a developer, in addition to writing code myself, I’ve spent a lot of time watching developers work, and one thing is clear: people and AI don’t code the same way.
How We Think
Good programmers follow a kind of loop:

They figure out what they need...]]></description><link>https://mahdix.com/why-ai-still-cant-build-software</link><guid isPermaLink="true">https://mahdix.com/why-ai-still-cant-build-software</guid><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Fri, 19 Sep 2025 12:34:03 GMT</pubDate><content:encoded><![CDATA[<p>As a developer, in addition to writing code myself, I’ve spent a lot of time watching developers work, and one thing is clear: people and AI don’t code the same way.</p>
<h3 id="heading-how-we-think">How We Think</h3>
<p>Good programmers follow a kind of loop:</p>
<ol>
<li><p>They figure out what they need to build. Not just “make a login page,” but how it fits with everything else (the big picture), what could break, and what users expect.</p>
</li>
<li><p>They write code to do that task.</p>
</li>
<li><p>They step back and check what their code actually does. Not what they <em>hoped</em> it would do.</p>
</li>
<li><p>They spot the gap between what they wanted and what they got, then fix it.</p>
</li>
</ol>
<p>The key here is that developers can hold multiple versions of reality in their heads. They know what the goal is, what the code currently does, and how the two compare.</p>
<h3 id="heading-where-ai-struggles">Where AI Struggles</h3>
<p>AI is great at spitting out code fast. It can read your codebase, add tests, even write logs. But it can’t keep track of what’s really happening.</p>
<p>It’s like a brilliant but forgetful friend. They’ll write code and assume it’s perfect. When tests fail, they guess randomly: is the test wrong, the code wrong, or something else? And if things get too messy, they just start over.</p>
<p>That’s the opposite of real programming. Humans pause, think, test, and adjust. They know when to rewrite and when to debug deeper; AI doesn’t.</p>
<h3 id="heading-will-ai-improve-soon">Will AI Improve Soon?</h3>
<p>Tens of billions of pounds are being poured into startups all trying to solve this problem. So, probably, but not just by making models bigger.</p>
<p>When coding, humans don’t keep everything in their head at once. They zoom in on details, then zoom out to the big picture, and juggle priorities. AI doesn’t do this well. Instead, it:</p>
<ul>
<li><p>Misses obvious but unstated things.</p>
</li>
<li><p>Forgets details from earlier.</p>
</li>
<li><p>Makes up fake features or errors.</p>
</li>
</ul>
<p>These aren’t minor issues. To really build software, you need to track what you want, what you have, and what to change. You need to have a mental model of your code as a whole, be able to divide it into smaller parts while connecting the dots between them. AI just isn’t there yet.</p>
<h3 id="heading-so-what-now">So What Now?</h3>
<p>AI is still very useful. It’s like a super-fast intern who never gets tired and knows every language. For niche, clear tasks, it’s amazing. For example, if you want a client for AWS SES to send emails from your app, you can have one in seconds.</p>
<p>But for bigger problems: debugging, design, or real problem-solving, you’re still in charge. AI can write the code, but you need to make sure it’s correct and actually does the job. I would say, you are the driver and AI is the car. You can use it however you want to speed up and deliver the output much quicker, but at the end of the day, the car can’t drive itself.</p>
<p>The future will likely be humans and AI working together, with humans leading. AI is a strong tool, but it’s still just a tool. You wouldn’t let your text editor design your app, and you shouldn’t let AI either.</p>
<p>At least not yet.</p>
]]></content:encoded></item><item><title><![CDATA[Neural Networks 101: The Backbone of GPT and LLMs Explained]]></title><description><![CDATA[Have you ever wondered what powers the remarkable conversations and creative outputs of tools like ChatGPT and other large language models (LLMs)?
At the heart of this technological revolution lies an intricate web of calculations known as neural net...]]></description><link>https://mahdix.com/neural-networks-101-the-backbone-of-gpt-and-llms-explained</link><guid isPermaLink="true">https://mahdix.com/neural-networks-101-the-backbone-of-gpt-and-llms-explained</guid><category><![CDATA[llm]]></category><category><![CDATA[gpt]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Mon, 01 Sep 2025 16:33:29 GMT</pubDate><content:encoded><![CDATA[<p>Have you ever wondered what powers the remarkable conversations and creative outputs of tools like ChatGPT and other large language models (LLMs)?</p>
<p>At the heart of this technological revolution lies an intricate web of calculations known as neural networks. These complex systems mimic the human brain's architecture, enabling computers to learn from vast amounts of data in ways we once thought were exclusive to humans. In this blog post, we'll embark on a fascinating journey into the realm of neural networks. Whether you're a tech enthusiast or simply curious about how today’s AI marvels work behind the scenes, join us as we break down these powerful frameworks and unveil their essential role in shaping intelligent systems that are redefining our interaction with technology!</p>
<h2 id="heading-the-building-blocks-understanding-neural-networks">The Building Blocks: Understanding Neural Networks</h2>
<p>Neural networks are computational models inspired by the biological neural networks that constitute animal brains. Just as our brains consist of billions of interconnected neurons that process and transmit information, artificial neural networks are composed of layers of interconnected nodes, or "artificial neurons," that work together to recognise patterns and make predictions.</p>
<p>Each connection between nodes has a weight that determines how much influence one node has on another. Through a process called training, these weights are adjusted based on the data the network encounters, allowing it to learn and improve its performance over time. This learning process is remarkably similar to how we humans learn from experience, gradually refining our understanding through repeated exposure to information.</p>
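<p>In code, a single artificial neuron is just a weighted sum squashed through an activation function. A toy sketch (the weights here are made up; training would adjust them from data):</p>

```python
import numpy as np

def sigmoid(z):
    # Squashes any number into the range (0, 1)
    return 1 / (1 + np.exp(-z))

inputs = np.array([1.0, 2.0, 0.5])    # signals arriving from other nodes
weights = np.array([0.5, -0.3, 0.8])  # one weight per connection
bias = 0.1

# The neuron "fires" based on the weighted sum of its inputs.
# Training means nudging `weights` and `bias` until outputs match the data.
output = sigmoid(inputs @ weights + bias)
```
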
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756744176749/0273790c-a022-4945-b395-8789cb9638cd.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-transformer-revolution-architecture-that-changed-everything">The Transformer Revolution: Architecture That Changed Everything</h2>
<p>The specific type of neural network architecture that powers GPT and most modern LLMs is called a transformer. Introduced in 2017, transformers revolutionised natural language processing by introducing a mechanism called "attention" that allows the model to focus on different parts of the input text when generating each word in its response.</p>
<p>Unlike earlier neural network architectures that processed text sequentially, transformers can examine all parts of a sentence simultaneously, understanding relationships between words regardless of their distance from each other. This parallel processing capability, combined with the attention mechanism, enables transformers to capture complex linguistic patterns, understand context across long passages, and generate coherent, contextually appropriate responses that seem almost human-like in their sophistication.</p>
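<p>The attention mechanism itself is surprisingly compact. A bare-bones NumPy sketch of scaled dot-product self-attention (real transformers add learned projections, multiple heads, and masking):</p>

```python
import numpy as np

def self_attention(x):
    # Each row of x is one token's embedding. Every output row is a
    # weighted mix of ALL rows, so distant words can influence each other.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                 # how much each token attends to each other token
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ x

tokens = np.random.default_rng(0).normal(size=(3, 4))  # 3 "words", 4-dim embeddings
out = self_attention(tokens)
```

<p>Note there is no loop over positions: all tokens are processed at once, which is exactly the parallelism described above.</p>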
<h2 id="heading-scale-matters-from-networks-to-large-language-models">Scale Matters: From Networks to Large Language Models</h2>
<p>What transforms a neural network into a large language model is primarily a matter of scale and training methodology. Modern LLMs like ChatGPT contain trillions of parameters, which are the adjustable weights and biases within the neural network.</p>
<p>These massive models are trained on enormous datasets containing text from books, articles, websites, and other sources, learning to predict the next word in a sequence based on the preceding context. Through this seemingly simple task of next-word prediction, the neural network develops an understanding of grammar, facts, reasoning patterns, and even creative expression.</p>
<p>The "large" in large language models refers not only to the number of parameters but also to the computational resources required to train and run these systems, often requiring specialised hardware and significant energy consumption.</p>
<h2 id="heading-whate-next">What’s next?</h2>
<p>I will write more about LLMs, how they work and their applications. Follow me to stay up to date!</p>
]]></content:encoded></item><item><title><![CDATA[Stable Diffusion Explained]]></title><description><![CDATA[Stable Diffusion is a latent diffusion model (LDM) that generates images from text prompts using a process of iterative denoising. It operates in a compressed latent space rather than pixel space, making it more efficient while maintaining high-quali...]]></description><link>https://mahdix.com/stable-diffusion-explained</link><guid isPermaLink="true">https://mahdix.com/stable-diffusion-explained</guid><category><![CDATA[stable diffusion]]></category><dc:creator><![CDATA[Mahdi M.]]></dc:creator><pubDate>Fri, 21 Feb 2025 10:39:06 GMT</pubDate><content:encoded><![CDATA[<p>Stable Diffusion is a latent diffusion model (LDM) that generates images from text prompts using a process of iterative denoising. It operates in a compressed latent space rather than pixel space, making it more efficient while maintaining high-quality image synthesis. The model is based on deep learning techniques, particularly <a target="_blank" href="https://en.wikipedia.org/wiki/Variational_autoencoder">variational autoencoders</a> (VAEs), <a target="_blank" href="https://en.wikipedia.org/wiki/U-Net">U-Net</a> architectures, and text encoders, typically using <a target="_blank" href="https://en.wikipedia.org/wiki/Contrastive_Language-Image_Pre-training">CLIP</a> (Contrastive Language–Image Pretraining) to interpret textual descriptions.</p>
<p>The core principle of diffusion models is to gradually remove noise from an initially random input. Stable Diffusion first encodes an image into a lower-dimensional latent space using a VAE, reducing computational complexity. Then, a <a target="_blank" href="https://en.wikipedia.org/wiki/U-Net">U-Net</a>, conditioned on a text prompt processed by a <a target="_blank" href="https://en.wikipedia.org/wiki/Contrastive_Language-Image_Pre-training">CLIP text encoder</a>, learns to predict and remove noise step by step. This denoising process is governed by a stochastic differential equation, which enables the generation of coherent and visually meaningful images from pure noise.</p>
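<p>The control flow of that denoising loop can be sketched in a few lines. This is a simplified illustration, not the real sampler: <code>predict_noise</code> stands in for the U-Net (which in Stable Diffusion would be conditioned on the CLIP text embedding), and the update rule here is a crude contraction rather than a proper diffusion schedule. What it shows is the shape of the algorithm: start from pure noise in latent space and repeatedly subtract a predicted noise component.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(latent, step):
    # Stand-in for the U-Net: the real model predicts the noise component
    # of `latent` at timestep `step`, conditioned on the text embedding.
    # Here we pretend the noise is a fixed fraction of the current latent,
    # so the loop visibly contracts toward a "clean" result.
    return 0.1 * latent

# Start from pure Gaussian noise in latent space (4 channels, 64x64),
# then iteratively remove the predicted noise.
latent = rng.standard_normal((4, 64, 64))
for step in range(50):
    latent = latent - predict_noise(latent, step)

# After many steps the latent is progressively denoised; in Stable Diffusion
# the VAE decoder would now map it back to a full-resolution image.
print(float(np.abs(latent).mean()))  # far smaller than the initial ~0.8
```

<p>In the actual model, each step also mixes in scheduler-controlled noise, which is where the stochastic differential equation mentioned above comes in.</p>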
<p>A key innovation in Stable Diffusion is its ability to operate in the latent space instead of the pixel space, making it significantly more efficient than earlier diffusion models like DALL·E 2 or Imagen. By applying denoising steps in a compressed latent representation, the model reduces the computational cost while preserving image fidelity. This allows it to run on consumer GPUs with as little as 4GB of VRAM, making high-quality image generation accessible to a wider audience.</p>
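<p>A quick back-of-the-envelope calculation shows why latent-space denoising is so much cheaper. Stable Diffusion's VAE downsamples a 512×512 RGB image by 8× in each spatial dimension into a 4-channel latent:</p>

```python
# Pixel space vs. Stable Diffusion's latent space:
# the VAE downsamples 8x spatially and produces 4 latent channels.
pixel_values  = 512 * 512 * 3   # RGB image: 786,432 values
latent_values = 64 * 64 * 4     # latent tensor: 16,384 values

print(pixel_values // latent_values)  # 48 — the U-Net processes ~48x less data per step
```

<p>Every one of the dozens of denoising steps runs on this 48×-smaller tensor, which is what makes 4GB consumer GPUs viable.</p>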
<p>Text conditioning in Stable Diffusion is achieved using a cross-attention mechanism that aligns textual features with visual representations. The CLIP text encoder translates text into a latent representation that guides the image generation process. This enables users to create highly specific outputs by modifying prompts with keywords, weights, or negative prompts to influence style, composition, and details. The cross-attention layers in the U-Net allow the model to dynamically adjust image features based on the input prompt.</p>
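<p>The cross-attention mechanism itself is just scaled dot-product attention where the queries come from image (latent) tokens and the keys/values come from the text encoder's output. Here is a minimal NumPy sketch; the dimensions (32-dimensional features, 77 text tokens as in CLIP) are illustrative, and a real U-Net adds learned projection matrices and multiple heads:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: image tokens (queries) attend to
    text tokens (keys/values), as in the U-Net's cross-attention layers."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # image-to-text similarity
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ values                  # text-guided image features

img_tokens  = rng.standard_normal((64, 32))   # e.g. 8x8 grid of latent patches
text_tokens = rng.standard_normal((77, 32))   # CLIP-style token sequence

out = cross_attention(img_tokens, text_tokens, text_tokens)
print(out.shape)  # (64, 32): each image token now carries text-conditioned features
```

<p>Prompt weighting and negative prompts work by manipulating exactly these text-token inputs before the attention is computed.</p>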
<p>Stable Diffusion has numerous applications, including digital art, concept design, and image inpainting. Additionally, its open-source nature allows developers to fine-tune the model for specific tasks, such as generating medical images or architectural designs. Despite its advantages, the model also raises ethical concerns, such as bias in training data and potential misuse for deepfake generation. Researchers continue to explore ways to improve safety, controllability, and fairness in AI-generated content.</p>
<p>You can learn more about Stable Diffusion here: <a target="_blank" href="https://sdtools.org/">https://sdtools.org/</a></p>
<p>P.S: Took a while :-)</p>
]]></content:encoded></item></channel></rss>