<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI on David R. Longnecker - Converting Coffee to Code</title><link>https://drlongnecker.com/categories/ai/</link><description>Recent content in AI on David R. Longnecker - Converting Coffee to Code</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 29 Apr 2026 09:00:00 -0600</lastBuildDate><atom:link href="https://drlongnecker.com/categories/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>The New Front Door</title><link>https://drlongnecker.com/blog/2026/04/agent-facing-product-design-api-first-interface/</link><pubDate>Wed, 29 Apr 2026 09:00:00 -0600</pubDate><guid>https://drlongnecker.com/blog/2026/04/agent-facing-product-design-api-first-interface/</guid><description>&lt;p&gt;When mobile emerged as a serious product surface, a lot of teams, my own included, responded by making their web app text smaller. It wasn&amp;rsquo;t laziness; &amp;ldquo;Mobile&amp;rdquo; just felt like a variant of what already existed, not a different interface. That misread cost some teams years.&lt;/p&gt;
&lt;p&gt;Agents are the next version of that mistake.&lt;/p&gt;
&lt;p&gt;The transition from web to mobile required rethinking interaction from scratch. Touch instead of hover. Persistent context instead of long sessions. Teams treating mobile as &amp;ldquo;the web, but smaller&amp;rdquo; shipped products that technically worked and practically failed. The teams that asked &amp;ldquo;how does a mobile user actually behave?&amp;rdquo; built something different.&lt;/p&gt;</description></item><item><title>The Dashboard Nobody Believes</title><link>https://drlongnecker.com/blog/2026/04/developer-metrics-comprehension-gap-ai-measurement/</link><pubDate>Mon, 27 Apr 2026 09:00:00 -0600</pubDate><guid>https://drlongnecker.com/blog/2026/04/developer-metrics-comprehension-gap-ai-measurement/</guid><description>&lt;p&gt;DORA metrics (deployment frequency, lead time for changes, change failure rate, mean time to recovery) were a genuine improvement over what came before. Measuring story points and PR counts felt like measuring motion rather than progress, and the &lt;a href="https://dora.dev/guides/dora-metrics/"&gt;DORA framework&lt;/a&gt; shifted attention toward delivery outcomes that actually mattered.&lt;/p&gt;</description></item><item><title>The Code Nobody Understands</title><link>https://drlongnecker.com/blog/2026/04/cognitive-debt-developer-comprehension-ai-systems/</link><pubDate>Mon, 20 Apr 2026 09:00:00 -0600</pubDate><guid>https://drlongnecker.com/blog/2026/04/cognitive-debt-developer-comprehension-ai-systems/</guid><description>&lt;p&gt;The abstraction layer keeps moving. That&amp;rsquo;s the simplest framing of what&amp;rsquo;s happening in software development right now, and it helps explain why the usual metrics for team health aren&amp;rsquo;t keeping up.&lt;/p&gt;
&lt;p&gt;Most developers stopped thinking about assembly code decades ago. The compiled languages we use do that work, and we&amp;rsquo;ve accepted (correctly) that not understanding register allocation is a fine trade for working at a higher level of abstraction. Later, managed languages removed memory management from the mental model. Every time an abstraction layer stabilized, the developers working above it could focus on the new problems at the new level, rather than the already-solved ones below.&lt;/p&gt;
&lt;p&gt;In recent months, AI has started doing it again. The level we&amp;rsquo;re moving toward isn&amp;rsquo;t &amp;ldquo;write code that does X.&amp;rdquo; It&amp;rsquo;s &amp;ldquo;express intent clearly enough that generated code does X.&amp;rdquo; The abstraction is real and the productivity gains are real. What&amp;rsquo;s less visible is what the gap looks like when that layer doesn&amp;rsquo;t hold&amp;ndash;when the &amp;ldquo;vibe&amp;rdquo; fails.&lt;/p&gt;</description></item><item><title>Learning as a Superpower: Deep Understanding with AI</title><link>https://drlongnecker.com/blog/2025/05/learning-as-superpower-deep-understanding-age-of-ai/</link><pubDate>Sun, 25 May 2025 09:00:00 -0600</pubDate><guid>https://drlongnecker.com/blog/2025/05/learning-as-superpower-deep-understanding-age-of-ai/</guid><description>&lt;h2 id="the-paradox-of-abundant-knowledge"&gt;The Paradox of Abundant Knowledge&lt;/h2&gt;
&lt;p&gt;This month, as I&amp;rsquo;ve been working with several new teams, the concepts of learning and accelerating understanding have been at the forefront of my mind—particularly how AI fits into everyone&amp;rsquo;s learning toolkit. As a seasoned technical leader who&amp;rsquo;s witnessed multiple paradigm shifts over the past 30 years, I&amp;rsquo;ve noticed a curious paradox emerging: as AI tools make information more accessible than ever, &lt;strong&gt;deep understanding becomes increasingly valuable&lt;/strong&gt;.&lt;/p&gt;</description></item><item><title>Understanding Agent-Based vs. Agentic AI</title><link>https://drlongnecker.com/blog/2025/05/agent-based-vs-agentic-ai/</link><pubDate>Mon, 19 May 2025 09:00:00 -0600</pubDate><guid>https://drlongnecker.com/blog/2025/05/agent-based-vs-agentic-ai/</guid><description>&lt;p&gt;In press and marketing, terms like &amp;ldquo;agent-based&amp;rdquo; and &amp;ldquo;agentic&amp;rdquo; are often used interchangeably, creating confusion even among seasoned technologists. This distinction isn&amp;rsquo;t just semantic nitpicking—understanding the difference has profound implications for how we build, deploy, and govern AI systems.&lt;/p&gt;
&lt;p&gt;Gartner has officially named &lt;strong&gt;agentic AI&lt;/strong&gt; as the top strategic technology trend for 2025, predicting that &amp;ldquo;by 2028, at least 15 percent of day-to-day work decisions will be made autonomously through agentic AI, up from 0 percent in 2024&amp;rdquo; (&lt;a href="https://internationalbanker.com/technology/will-2025-be-the-year-of-the-ai-agents/"&gt;International Banker, 2025&lt;/a&gt;).&lt;/p&gt;</description></item><item><title>Hitting the Bullseye: Accuracy vs. Precision in Model Tuning</title><link>https://drlongnecker.com/blog/2025/05/accuracy-precision-ai-model-tuning/</link><pubDate>Sun, 11 May 2025 09:00:00 -0600</pubDate><guid>https://drlongnecker.com/blog/2025/05/accuracy-precision-ai-model-tuning/</guid><description>&lt;p&gt;When tuning AI models, particularly Retrieval-Augmented Generation (RAG) systems, many teams either overindex on or confuse accuracy and precision. This fundamental misunderstanding leads to systems that either land near the mark occasionally but scatter wildly in practice, or consistently miss the mark (or hallucinate) in the same way, repeatedly.&lt;/p&gt;
&lt;p&gt;Understanding the difference between these concepts isn&amp;rsquo;t academic&amp;ndash;it directly impacts how effectively your generative AI system serves users and delivers business value. RAG systems, which enhance large language models with external knowledge sources, are particularly sensitive to these distinctions.&lt;/p&gt;</description></item></channel></rss>