<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Pavel Sanikovich — Engineering Notes]]></title><description><![CDATA[Practical mental models for engineers building real systems.
Deep dives into concurrency, distributed systems, performance, and backend architecture — beyond tu]]></description><link>https://blog.devflex.pro</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 21:10:04 GMT</lastBuildDate><atom:link href="https://blog.devflex.pro/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Why Most Go Performance Advice Is Outdated (Go 1.25 Edition)]]></title><description><![CDATA[A lot of Go performance advice still circulating today was not born wrong.
It was born early.
Most of it emerged when the Go runtime was younger, the compiler less aggressive, and the garbage collector far more sensitive to allocation patterns. Over ...]]></description><link>https://blog.devflex.pro/why-most-go-performance-advice-is-outdated-go-125-edition</link><guid isPermaLink="true">https://blog.devflex.pro/why-most-go-performance-advice-is-outdated-go-125-edition</guid><category><![CDATA[Go Language]]></category><category><![CDATA[performance]]></category><category><![CDATA[Programming Blogs]]></category><dc:creator><![CDATA[Pavel Sanikovich]]></dc:creator><pubDate>Sat, 10 Jan 2026 16:29:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768062997722/7baf2402-fc9f-4ac5-b1fc-5884df98213c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A lot of Go performance advice still circulating today was not born wrong.</p>
<p>It was born <strong>early</strong>.</p>
<p>Most of it emerged when the Go runtime was younger, the compiler less aggressive, and the garbage collector far more sensitive to allocation patterns. Over time, the runtime evolved — but the advice stayed frozen.</p>
<p>This article is not about replacing one list of rules with another.<br />It’s about <strong>verifying which intuitions still hold in modern Go</strong>, and which ones quietly stopped matching reality.</p>
<p>Instead of arguing, we’ll measure.</p>
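<p>Every table below comes from standard Go benchmarks. If you want to reproduce them (the numbers shown are from my machine; yours will differ), the usual invocation is:</p>
<pre><code class="lang-plaintext">$ go test -bench=. -benchmem -count=10
</code></pre>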
<hr />
<h2 id="heading-allocation-vs-lifetime-the-core-misunderstanding">Allocation vs Lifetime: The Core Misunderstanding</h2>
<p>The most persistent performance instinct in Go is the fear of heap allocations.</p>
<p>The intuition feels solid: stack allocations are cheap, heap allocations are expensive, garbage collection is costly. Therefore, avoid heap allocations.</p>
<p>That logic collapses once you separate <strong>allocation cost</strong> from <strong>object lifetime</strong>.</p>
<p>In modern Go, allocating an object on the heap is usually cheap. Keeping it alive is not.</p>
<p>Let’s start with the simplest possible benchmark: allocating short-lived heap objects versus doing nothing special at all.</p>
<h3 id="heading-benchmark-short-lived-heap-allocation">Benchmark: Short-Lived Heap Allocation</h3>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> <span class="hljs-string">"testing"</span>

<span class="hljs-keyword">var</span> sink <span class="hljs-keyword">int</span>

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">allocShortLived</span><span class="hljs-params">(n <span class="hljs-keyword">int</span>)</span></span> {
    s := <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n { <span class="hljs-comment">// modern: range over int</span>
        x := <span class="hljs-built_in">new</span>(<span class="hljs-keyword">int</span>)
        *x = i
        s += *x
    }
    sink = s <span class="hljs-comment">// escape to global to prevent elimination</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkShortLivedAlloc</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    <span class="hljs-keyword">for</span> b.Loop() {
        allocShortLived(<span class="hljs-number">1024</span>)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">noAlloc</span><span class="hljs-params">(n <span class="hljs-keyword">int</span>)</span></span> {
    s := <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n {
        x := i
        s += x
    }
    sink = s
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkShortLived_NoAlloc</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    <span class="hljs-keyword">for</span> b.Loop() {
        noAlloc(<span class="hljs-number">1024</span>)
    }
}
</code></pre>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Benchmark</td><td>ns/op (range)</td><td>B/op</td><td>allocs/op</td></tr>
</thead>
<tbody>
<tr>
<td>ShortLivedAlloc (with new)</td><td>278–282</td><td>0</td><td>0</td></tr>
<tr>
<td>ShortLived_NoAlloc</td><td>277–279</td><td>0</td><td>0</td></tr>
</tbody>
</table>
</div><p>Despite using <code>new(int)</code>, the benchmark reports <strong>0 allocations per operation</strong>. This means the compiler was able to keep the allocated value on the stack (or eliminate the allocation entirely) because the pointer never escaped.</p>
<p>In modern Go, using pointers does not automatically imply heap allocation. Allocation location is a compiler decision based on escape analysis, not syntax.</p>
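<p>You can observe this at runtime without a full benchmark suite. The sketch below (function names are mine) uses <code>testing.AllocsPerRun</code> to show that the same <code>new(int)</code> hits the heap only when the pointer actually escapes:</p>
<pre><code class="lang-go">package main

import (
	"fmt"
	"testing"
)

var escaped *int

// stackOnly creates a pointer that never leaves the function,
// so escape analysis keeps the value on the stack.
func stackOnly(n int) int {
	x := new(int)
	*x = n
	return *x
}

// heapForced stores the pointer in a package-level variable,
// so the value must outlive the call and is heap-allocated.
func heapForced(n int) {
	x := new(int)
	*x = n
	escaped = x
}

func main() {
	a := testing.AllocsPerRun(1000, func() { _ = stackOnly(7) })
	b := testing.AllocsPerRun(1000, func() { heapForced(7) })
	fmt.Println(a, b) // 0 1
}
</code></pre>
<p>Same syntax, different lifetime — and only the escaping version registers as an allocation.</p>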
<hr />
<h2 id="heading-why-preallocation-became-a-cargo-cult">Why Preallocation Became a Cargo Cult</h2>
<p>Preallocating slices is one of the most common “optimizations” people apply reflexively.</p>
<p>The reasoning is simple: slice growth reallocates memory, reallocation is expensive, therefore we should always preallocate.</p>
<p>That reasoning breaks down when the preallocation guess is wrong — which is most of the time in real systems.</p>
<p>Let’s compare three cases: no preallocation, exact preallocation, and aggressive over-preallocation.</p>
<h3 id="heading-benchmark-slice-growth-strategies">Benchmark: Slice Growth Strategies</h3>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> <span class="hljs-string">"testing"</span>

<span class="hljs-keyword">const</span> sliceN = <span class="hljs-number">256</span>

<span class="hljs-comment">// sinks prevent compiler from eliminating work.</span>
<span class="hljs-keyword">var</span> sinkSlice []<span class="hljs-keyword">int</span>
<span class="hljs-keyword">var</span> sinkInt <span class="hljs-keyword">int</span>

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">buildNoPrealloc</span><span class="hljs-params">(n <span class="hljs-keyword">int</span>)</span> []<span class="hljs-title">int</span></span> {
    <span class="hljs-keyword">var</span> out []<span class="hljs-keyword">int</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n { 
        out = <span class="hljs-built_in">append</span>(out, i)
    }
    <span class="hljs-keyword">return</span> out
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">buildExactPrealloc</span><span class="hljs-params">(n <span class="hljs-keyword">int</span>)</span> []<span class="hljs-title">int</span></span> {
    out := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">int</span>, <span class="hljs-number">0</span>, n)
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n {
        out = <span class="hljs-built_in">append</span>(out, i)
    }
    <span class="hljs-keyword">return</span> out
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">buildOverPrealloc</span><span class="hljs-params">(n <span class="hljs-keyword">int</span>)</span> []<span class="hljs-title">int</span></span> {
    <span class="hljs-comment">// Intentionally over-allocating to simulate a common "just in case" optimization.</span>
    out := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">int</span>, <span class="hljs-number">0</span>, n*<span class="hljs-number">16</span>)
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n {
        out = <span class="hljs-built_in">append</span>(out, i)
    }
    <span class="hljs-keyword">return</span> out
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkSlices_NoPrealloc</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    <span class="hljs-keyword">for</span> b.Loop() {
        s := buildNoPrealloc(sliceN)
        <span class="hljs-comment">// touch result so it can't be optimized away</span>
        sinkInt += <span class="hljs-built_in">len</span>(s)
        sinkSlice = s
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkSlices_ExactPrealloc</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    <span class="hljs-keyword">for</span> b.Loop() {
        s := buildExactPrealloc(sliceN)
        sinkInt += <span class="hljs-built_in">len</span>(s)
        sinkSlice = s
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkSlices_OverPrealloc</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    <span class="hljs-keyword">for</span> b.Loop() {
        s := buildOverPrealloc(sliceN)
        sinkInt += <span class="hljs-built_in">len</span>(s)
        sinkSlice = s
    }
}
</code></pre>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Benchmark</td><td>ns/op (≈ typical)</td><td>B/op</td><td>allocs/op</td></tr>
</thead>
<tbody>
<tr>
<td>Slices_NoPrealloc (n=256)</td><td>~1050</td><td>4088</td><td>9</td></tr>
<tr>
<td>Slices_ExactPrealloc</td><td>~410</td><td>2048</td><td>1</td></tr>
<tr>
<td>Slices_OverPrealloc (x16)</td><td>~4500</td><td>32768</td><td>1</td></tr>
</tbody>
</table>
</div><p>Exact preallocation is the “good” case here: it cuts allocations from 9 to 1, halves B/op, and reduces runtime from ~1050 ns/op to ~410 ns/op. No prealloc triggers slice growth and multiple reallocations (9 allocs/op), which is exactly the kind of overhead preallocation is meant to avoid.</p>
<p>The interesting part is over-preallocation. It still performs only 1 allocation, but it allocates far more memory (32 KB/op) and becomes significantly slower (~4.5 µs/op typical). This doesn’t mean “over-prealloc is always bad” — it means that allocating far more capacity than you’ll use can increase memory traffic and hurt cache behavior, even when allocation count looks great.</p>
<p>In modern Go, allocation count alone is a poor proxy for performance — bytes allocated and object lifetime often matter more.</p>
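<p>The practical middle ground: preallocate when the final size is actually known, and let <code>append</code> grow the slice when it isn’t. A minimal sketch (the function name is mine):</p>
<pre><code class="lang-go">package main

import "fmt"

// doubleAll maps a slice into a new one. The output length is known
// exactly (one element per input), so preallocation here is a
// guess-free win: one allocation, no growth, no wasted capacity.
func doubleAll(in []int) []int {
	out := make([]int, 0, len(in))
	for _, v := range in {
		out = append(out, v*2)
	}
	return out
}

func main() {
	fmt.Println(doubleAll([]int{1, 2, 3})) // [2 4 6]
}
</code></pre>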
<hr />
<h2 id="heading-interfaces-the-optimization-that-rarely-pays">Interfaces: The Optimization That Rarely Pays</h2>
<p>Another long-standing belief is that interface calls are inherently slow and should be avoided in performance-sensitive code.</p>
<p>This belief comes from a time when interface dispatch blocked inlining and added measurable overhead.</p>
<p>Modern compilers are far more capable.</p>
<p>Let’s measure the difference between concrete calls, interface calls, and generic dispatch.</p>
<h3 id="heading-benchmark-interface-vs-concrete-vs-generic">Benchmark: Interface vs Concrete vs Generic</h3>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> <span class="hljs-string">"testing"</span>

<span class="hljs-keyword">type</span> Adder <span class="hljs-keyword">interface</span> {
    Add(<span class="hljs-keyword">int</span>) <span class="hljs-keyword">int</span>
}

<span class="hljs-keyword">type</span> impl <span class="hljs-keyword">struct</span> {
    base <span class="hljs-keyword">int</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(i impl)</span> <span class="hljs-title">Add</span><span class="hljs-params">(x <span class="hljs-keyword">int</span>)</span> <span class="hljs-title">int</span></span> {
    <span class="hljs-keyword">return</span> i.base + x
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">callConcrete</span><span class="hljs-params">(v impl, n <span class="hljs-keyword">int</span>)</span> <span class="hljs-title">int</span></span> {
    sum := <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n {
        sum += v.Add(i)
    }
    <span class="hljs-keyword">return</span> sum
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">callInterface</span><span class="hljs-params">(v Adder, n <span class="hljs-keyword">int</span>)</span> <span class="hljs-title">int</span></span> {
    sum := <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n {
        sum += v.Add(i)
    }
    <span class="hljs-keyword">return</span> sum
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">callGeneric</span>[<span class="hljs-title">T</span> <span class="hljs-title">interface</span></span>{ Add(<span class="hljs-keyword">int</span>) <span class="hljs-keyword">int</span> }](v T, n <span class="hljs-keyword">int</span>) <span class="hljs-keyword">int</span> {
    sum := <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n {
        sum += v.Add(i)
    }
    <span class="hljs-keyword">return</span> sum
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkConcrete</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    v := impl{base: <span class="hljs-number">10</span>}
    <span class="hljs-keyword">for</span> b.Loop() {
        _ = callConcrete(v, <span class="hljs-number">1024</span>)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkInterface</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    v := impl{base: <span class="hljs-number">10</span>}
    <span class="hljs-keyword">for</span> b.Loop() {
        _ = callInterface(v, <span class="hljs-number">1024</span>)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkGeneric</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    v := impl{base: <span class="hljs-number">10</span>}
    <span class="hljs-keyword">for</span> b.Loop() {
        _ = callGeneric(v, <span class="hljs-number">1024</span>)
    }
}
</code></pre>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Benchmark</td><td>ns/op (≈ typical)</td><td>B/op</td><td>allocs/op</td></tr>
</thead>
<tbody>
<tr>
<td>Concrete call</td><td>~278</td><td>0</td><td>0</td></tr>
<tr>
<td>Interface call</td><td>~1645</td><td>0</td><td>0</td></tr>
<tr>
<td>Generic call</td><td>~1645</td><td>0</td><td>0</td></tr>
</tbody>
</table>
</div><p>This benchmark isolates dispatch overhead in a tight loop — no allocations, no I/O, no cache-heavy data structures. In this artificial setting, interface and generic dispatch are ~6× slower than a direct concrete call (~1.6 µs vs ~0.28 µs per 1024 iterations), while still producing <strong>0 allocs/op</strong>.</p>
<p>The point is not “interfaces are free” — they aren’t. The point is that this overhead is purely call-level and often disappears inside real workloads where time is dominated by memory access, synchronization, syscalls, or network I/O.</p>
<p>So the modern rule is: interfaces can cost something, but optimizing them is rarely the first place you get meaningful wins — measure before reshaping your APIs.</p>
<p>Note: this generic benchmark uses a method constraint, which may still compile to indirect calls depending on how the compiler specializes the instantiation.</p>
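<p>If a profile ever does point at dispatch in a hot loop, one option short of reshaping your API is a concrete fast path behind a type assertion. A hypothetical sketch (the <code>fast</code> type is invented for illustration):</p>
<pre><code class="lang-go">package main

import "fmt"

type Adder interface {
	Add(int) int
}

type fast struct{ base int }

func (f fast) Add(x int) int { return f.base + x }

// sumVia checks for the common concrete type first; inside that branch
// the call is direct and inlinable. Any other implementation still
// works through the ordinary interface path.
func sumVia(a Adder, n int) int {
	sum := 0
	if f, ok := a.(fast); ok {
		for i := range n {
			sum += f.Add(i)
		}
		return sum
	}
	for i := range n {
		sum += a.Add(i)
	}
	return sum
}

func main() {
	fmt.Println(sumVia(fast{base: 0}, 4)) // 0+1+2+3 = 6
}
</code></pre>
<p>Use this only when measurements justify it — it couples the caller to one implementation.</p>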
<hr />
<h2 id="heading-syncpool-when-the-cure-becomes-the-disease"><code>sync.Pool</code>: When the Cure Becomes the Disease</h2>
<p><code>sync.Pool</code> is often introduced as a way to “reduce GC pressure”.</p>
<p>That framing is misleading.</p>
<p><code>sync.Pool</code> is designed to improve throughput by opportunistically reusing short-lived objects. It is explicitly allowed to drop its contents at any time.</p>
<p>Let’s compare direct allocation with pooled reuse.</p>
<h3 id="heading-benchmark-allocation-vs-pool-reuse">Benchmark: Allocation vs Pool Reuse</h3>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"sync"</span>
    <span class="hljs-string">"testing"</span>
)

<span class="hljs-keyword">var</span> bufPool = sync.Pool{
    New: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-title">any</span></span> {
        b := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">byte</span>, <span class="hljs-number">32</span>*<span class="hljs-number">1024</span>)
        <span class="hljs-keyword">return</span> &amp;b
    },
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">allocBuffers</span><span class="hljs-params">(n <span class="hljs-keyword">int</span>)</span></span> {
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n {
        b := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">byte</span>, <span class="hljs-number">32</span>*<span class="hljs-number">1024</span>)
        b[<span class="hljs-number">0</span>] = <span class="hljs-keyword">byte</span>(i)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">poolBuffers</span><span class="hljs-params">(n <span class="hljs-keyword">int</span>)</span></span> {
    <span class="hljs-keyword">for</span> i := <span class="hljs-keyword">range</span> n {
        p := bufPool.Get().(*[]<span class="hljs-keyword">byte</span>)
        b := *p
        b[<span class="hljs-number">0</span>] = <span class="hljs-keyword">byte</span>(i)
        bufPool.Put(p)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkAlloc</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    <span class="hljs-keyword">for</span> b.Loop() {
        allocBuffers(<span class="hljs-number">128</span>)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkPool</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    <span class="hljs-keyword">for</span> b.Loop() {
        poolBuffers(<span class="hljs-number">128</span>)
    }
}
</code></pre>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Benchmark</td><td>ns/op (≈ typical)</td><td>B/op</td><td>allocs/op</td></tr>
</thead>
<tbody>
<tr>
<td>Alloc (make)</td><td>~41</td><td>0</td><td>0</td></tr>
<tr>
<td>Pool (Get/Put)</td><td>~1700</td><td>0</td><td>0</td></tr>
</tbody>
</table>
</div><p>In this benchmark, <code>sync.Pool</code> is dramatically slower than plain allocation: ~1.7 µs/op versus ~41 ns/op. The key detail is that both variants report <strong>0 allocs/op</strong>, which means the compiler was able to eliminate the allocation work in the “Alloc” case (and the pool path is mostly measuring <code>Get/Put</code> synchronization and bookkeeping overhead).</p>
<p>This is a good reminder that <code>sync.Pool</code> is not a universal “make things faster” switch. In microbenchmarks where allocations don’t actually hit the heap, a pool can easily be pure overhead.</p>
<p>This benchmark intentionally represents a case where allocations do not escape to the heap, making it a worst-case scenario for <code>sync.Pool</code>.</p>
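<p>Where <code>sync.Pool</code> does pay is in reusing objects that genuinely escape to the heap and are expensive to rebuild. Correct usage has two non-negotiable rules: reset before use, and never touch the object after <code>Put</code>. A minimal sketch using <code>bytes.Buffer</code> (the function and pool names are mine):</p>
<pre><code class="lang-go">package main

import (
	"bytes"
	"fmt"
	"sync"
)

var renderPool = sync.Pool{
	// New runs only when the pool is empty. The pool may also drop
	// its contents at any GC, so never rely on objects surviving.
	New: func() any { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := renderPool.Get().(*bytes.Buffer)
	buf.Reset() // pooled objects carry stale state: always reset first
	buf.WriteString("hello, ")
	buf.WriteString(name)
	s := buf.String() // copy the result out before returning the buffer
	renderPool.Put(buf)
	return s
}

func main() {
	fmt.Println(render("gopher")) // hello, gopher
}
</code></pre>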
<hr />
<h2 id="heading-retention-the-cost-that-actually-hurts">Retention: The Cost That Actually Hurts</h2>
<p>The most expensive performance bugs in modern Go are rarely about how fast memory is allocated.</p>
<p>They are about <strong>how long memory stays reachable</strong>.</p>
<p>Let’s compare two patterns: retaining large payloads vs extracting only what’s needed.</p>
<h3 id="heading-benchmark-memory-retention">Benchmark: Memory Retention</h3>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> <span class="hljs-string">"testing"</span>

<span class="hljs-keyword">var</span> sink2 [][]<span class="hljs-keyword">byte</span>

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">badRetention</span><span class="hljs-params">(n <span class="hljs-keyword">int</span>)</span> [][]<span class="hljs-title">byte</span></span> {
    out := <span class="hljs-built_in">make</span>([][]<span class="hljs-keyword">byte</span>, <span class="hljs-number">0</span>, n)
    <span class="hljs-keyword">for</span> <span class="hljs-keyword">range</span> n {
        b := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">byte</span>, <span class="hljs-number">64</span>*<span class="hljs-number">1024</span>)
        out = <span class="hljs-built_in">append</span>(out, b)
    }
    <span class="hljs-keyword">return</span> out
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">goodRetention</span><span class="hljs-params">(n <span class="hljs-keyword">int</span>)</span> [][]<span class="hljs-title">byte</span></span> {
    out := <span class="hljs-built_in">make</span>([][]<span class="hljs-keyword">byte</span>, <span class="hljs-number">0</span>, n)
    <span class="hljs-keyword">for</span> <span class="hljs-keyword">range</span> n {
        b := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">byte</span>, <span class="hljs-number">64</span>*<span class="hljs-number">1024</span>)
        out = <span class="hljs-built_in">append</span>(out, <span class="hljs-built_in">append</span>([]<span class="hljs-keyword">byte</span>(<span class="hljs-literal">nil</span>), b[:<span class="hljs-number">64</span>]...))
    }
    <span class="hljs-keyword">return</span> out
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkBadRetention</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    <span class="hljs-keyword">for</span> b.Loop() {
        sink2 = badRetention(<span class="hljs-number">128</span>)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkGoodRetention</span><span class="hljs-params">(b *testing.B)</span></span> {
    b.ReportAllocs()
    <span class="hljs-keyword">for</span> b.Loop() {
        sink2 = goodRetention(<span class="hljs-number">128</span>)
    }
}
</code></pre>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Benchmark</td><td>ns/op (≈ typical)</td><td>B/op</td><td>allocs/op</td></tr>
</thead>
<tbody>
<tr>
<td>BadRetention</td><td>~1.5 ms</td><td>~8.0 MB</td><td>129</td></tr>
<tr>
<td>GoodRetention</td><td>~90 µs</td><td>~11 KB</td><td>129</td></tr>
</tbody>
</table>
</div><p>Both benchmarks perform the same number of allocations (129 allocs/op). The difference is not how many objects are allocated, but how much memory they retain.</p>
<p>In the “bad retention” case, each iteration keeps references to large payloads, resulting in ~8 MB of live memory per operation and a runtime of ~1.5 ms/op. In the “good retention” case, the same number of allocations is performed, but only a small slice of each payload is retained, reducing live memory to ~11 KB and execution time to ~90 µs/op.</p>
<p>This is a ~16× difference in runtime with identical allocation counts.</p>
<p>This is why modern Go performance issues are rarely about allocation count: two programs with identical allocs/op can differ by an order of magnitude. What matters is retention: how much memory stays reachable, and for how long.</p>
<p>Reducing allocs/op without controlling object lifetime often optimizes the wrong thing.</p>
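<p>The same trap appears with substrings and subslices: a tiny view into a huge buffer keeps the whole backing array alive. The sketch below (names are mine) uses <code>strings.Clone</code> to cut that link:</p>
<pre><code class="lang-go">package main

import (
	"fmt"
	"strings"
)

// extractID pulls a small token out of a large payload. Without Clone,
// the returned substring would share the payload's backing array,
// keeping the entire payload reachable for as long as the ID is.
func extractID(payload string) string {
	id := payload[:8]        // shares payload's memory
	return strings.Clone(id) // copies 8 bytes; payload can now be collected
}

func main() {
	big := strings.Repeat("x", 1024*1024)
	fmt.Println(extractID(big)) // xxxxxxxx
}
</code></pre>
<p>The byte-slice equivalent is <code>append([]byte(nil), b[:n]...)</code> — exactly what the “good retention” benchmark above does.</p>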
<hr />
<h2 id="heading-the-real-shift-in-modern-go-performance">The Real Shift in Modern Go Performance</h2>
<p>What changed by Go 1.25 is not a single feature or trick. The bigger shift is that many mechanical costs got cheaper, so architectural costs now dominate the profile.</p>
<p>Modern Go rewards designs with clear ownership and short-lived data. When lifetimes are explicit and concurrency is bounded, the runtime has much less “mess” to manage, and optimizations become predictable.</p>
<p>Old advice often assumed the runtime was fragile. In modern Go, the runtime is usually fine — it’s unclear lifetimes and accidental retention that break performance.</p>
<hr />
<h2 id="heading-closing-thought">Closing Thought</h2>
<p>If you optimize before measuring, you are probably optimizing a version of Go you are no longer running.</p>
<p>Let the runtime handle mechanics.</p>
<p>Your responsibility is to design systems whose data does not outlive its usefulness.</p>
]]></content:encoded></item><item><title><![CDATA[Stack vs Heap in Go: How Escape Analysis Actually Works]]></title><description><![CDATA[One of the most common questions Go developers ask is whether a value is allocated on the stack or on the heap.
And just as often, the question itself is slightly wrong.
In Go, stack vs heap is not a manual decision — it’s a result of escape analysis...]]></description><link>https://blog.devflex.pro/stack-vs-heap-in-go-how-escape-analysis-actually-works</link><guid isPermaLink="true">https://blog.devflex.pro/stack-vs-heap-in-go-how-escape-analysis-actually-works</guid><category><![CDATA[General Programming]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[best practices]]></category><dc:creator><![CDATA[Pavel Sanikovich]]></dc:creator><pubDate>Fri, 09 Jan 2026 20:36:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767990764130/dde780f2-b210-44c6-8ae7-03ace4e77def.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the most common questions Go developers ask is whether a value is allocated on the stack or on the heap.</p>
<p>And just as often, the question itself is slightly wrong.</p>
<p>In Go, stack vs heap is not a manual decision — it’s a result of escape analysis, and misunderstanding this leads to performance myths and unnecessary micro-optimizations.</p>
<p>If you’ve ever profiled a Go program and wondered why a simple function allocates memory, or why a tiny struct suddenly ends up on the heap, you’ve seen the effects of escape analysis. Beginners often learn stack and heap as if they were fixed rules — small things go on the stack, large things go on the heap — but Go doesn’t work like that. Size is almost never the deciding factor (only very large values are forced onto the heap regardless). What matters is lifetime.</p>
<p>A value stays on the stack only if the compiler can prove it never outlives the function that created it. The moment the lifetime becomes ambiguous, that value “escapes,” and Go places it on the heap. This is not a heuristic and not guesswork; it is a strict safety rule.</p>
<p>Understanding this rule gives you x-ray vision into your Go programs. You start predicting allocations before they happen. You see how small changes in code shape memory behavior. And most importantly, you learn to write Go the way the compiler expects — which results in faster, cleaner, more predictable programs.</p>
<p>Let's walk through this from the ground up.</p>
<h2 id="heading-what-this-article-focuses-on">What This Article Focuses On</h2>
<p>This article is not about forcing allocations onto the stack or the heap.</p>
<p>It explains how Go’s escape analysis works, why certain values escape, and why trying to “outsmart” the compiler usually backfires.</p>
<hr />
<h2 id="heading-the-stack-fast-local-temporary"><strong>The Stack: Fast, Local, Temporary</strong></h2>
<p>A stack frame exists only during the execution of a function. When the function returns, the frame disappears. If a value can be proven to stay within that frame, it is stack allocated.</p>
<p>A simple example:</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">sum</span><span class="hljs-params">(a, b <span class="hljs-keyword">int</span>)</span> <span class="hljs-title">int</span></span> {
    c := a + b
    <span class="hljs-keyword">return</span> c
}
</code></pre>
<p>If we ask the compiler to report escape analysis:</p>
<pre><code class="lang-plaintext">$ go build -gcflags="-m"
</code></pre>
<p>no <code>escapes to heap</code> diagnostics appear for this function.</p>
<p>Nothing escapes. Everything is on the stack. The compiler is even free to inline the function, meaning the variables may never exist as “variables” at all — they become registers or constants.</p>
<p>This is the ideal path: pure stack behavior, no GC pressure, no heap work.</p>
<hr />
<h2 id="heading-the-heap-for-values-with-extended-lifetime"><strong>The Heap: For Values with Extended Lifetime</strong></h2>
<p>A value must live on the heap if something outside the current stack frame needs to reference it. Returning a pointer is the most common example:</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">makePtr</span><span class="hljs-params">()</span> *<span class="hljs-title">int</span></span> {
    x := <span class="hljs-number">42</span>
    <span class="hljs-keyword">return</span> &amp;x
}
</code></pre>
<p>Compiler output:</p>
<pre><code class="lang-plaintext">./main.go:4:9: &amp;x escapes to heap
</code></pre>
<p>This has nothing to do with the size of <code>x</code>. The compiler simply sees that the caller needs a reference to <code>x</code> after the function returns. The stack frame cannot hold it anymore.</p>
<p>A beginner often doesn’t realize that the <em>pointer itself</em> is not expensive — it’s the lifetime extension that forces the escape.</p>
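<p>Conversely, a pointer that only travels <em>down</em> the call stack usually does not force an escape. A small sketch (function names are mine) — <code>testing.AllocsPerRun</code> reports zero allocations because the compiler can prove the pointee dies with the caller:</p>
<pre><code class="lang-go">package main

import (
	"fmt"
	"testing"
)

// fill writes through the pointer but never stores it anywhere,
// so the pointee's lifetime stays bounded by the caller's frame.
func fill(p *int, v int) { *p = v }

func caller() int {
	p := new(int) // passed down, never returned or stored: stays on the stack
	fill(p, 42)
	return *p
}

func main() {
	allocs := testing.AllocsPerRun(1000, func() { _ = caller() })
	fmt.Println(caller(), allocs) // 42 0
}
</code></pre>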
<hr />
<h2 id="heading-closures-when-variables-quietly-escape"><strong>Closures: When Variables Quietly Escape</strong></h2>
<p>Closures are a classic place where beginners accidentally create heap allocations.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">counter</span><span class="hljs-params">()</span> <span class="hljs-title">func</span><span class="hljs-params">()</span> <span class="hljs-title">int</span></span> {
    x := <span class="hljs-number">0</span>
    <span class="hljs-keyword">return</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-title">int</span></span> {
        x++
        <span class="hljs-keyword">return</span> x
    }
}
</code></pre>
<p>Compiler output:</p>
<pre><code class="lang-plaintext">./main.go:6:13: func literal escapes to heap
./main.go:5:5: moved to heap: x
</code></pre>
<p>Why? Because the returned function continues to exist after <code>counter</code> finishes. It needs access to <code>x</code>. Therefore, <code>x</code> must move to the heap, where its lifetime is no longer tied to the stack frame.</p>
<p>Many beginners write closure-based code without realizing they are allocating memory every time.</p>
<hr />
<h2 id="heading-two-constructors-two-lifetimes-two-allocation-patterns"><strong>Two Constructors, Two Lifetimes, Two Allocation Patterns</strong></h2>
<p>This is one of the clearest ways to see escape analysis in action.</p>
<p><strong>Version 1 — returning a value:</strong></p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> User <span class="hljs-keyword">struct</span> {
    Name <span class="hljs-keyword">string</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">newUser</span><span class="hljs-params">(name <span class="hljs-keyword">string</span>)</span> <span class="hljs-title">User</span></span> {
    <span class="hljs-keyword">return</span> User{Name: name}
}
</code></pre>
<p>And now version 2 — returning a pointer:</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">newUserPtr</span><span class="hljs-params">(name <span class="hljs-keyword">string</span>)</span> *<span class="hljs-title">User</span></span> {
    <span class="hljs-keyword">return</span> &amp;User{Name: name}
}
</code></pre>
<p>For the first version, the compiler often places <code>User</code> directly into the caller’s stack frame. For the pointer version, the allocation must occur on the heap.</p>
<p>Same data. Same fields. Same size. Different lifetime = different memory behavior.</p>
<p>This is why experienced Go developers say: <strong>prefer returning values unless you need shared mutable state.</strong></p>
<hr />
<h2 id="heading-escape-analysis-loves-clear-ownership"><strong>Escape Analysis Loves Clear Ownership</strong></h2>
<p>Here’s a subtle example where a tiny rewrite prevents a heap escape:</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">sumSlice</span><span class="hljs-params">(nums []<span class="hljs-keyword">int</span>)</span> *<span class="hljs-title">int</span></span> {
    total := <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> _, v := <span class="hljs-keyword">range</span> nums {
        total += v
    }
    <span class="hljs-keyword">return</span> &amp;total
}
</code></pre>
<p>Compiler:</p>
<pre><code class="lang-plaintext">./main.go:7:12: &amp;total escapes to heap
</code></pre>
<p>But if we write it like this:</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">sumSlice</span><span class="hljs-params">(nums []<span class="hljs-keyword">int</span>)</span> <span class="hljs-title">int</span></span> {
    total := <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> _, v := <span class="hljs-keyword">range</span> nums {
        total += v
    }
    <span class="hljs-keyword">return</span> total
}
</code></pre>
<p>Now:</p>
<pre><code class="lang-plaintext">&lt;no escape&gt;
</code></pre>
<p>Exact same logic. Different lifetime semantics.</p>
<p>This is the power of understanding escape analysis: your intuition becomes aligned with the compiler.</p>
<hr />
<h2 id="heading-a-surprising-case-heap-allocations-without-pointers"><strong>A Surprising Case: Heap Allocations Without Pointers</strong></h2>
<p>Sometimes a heap escape happens even though you don’t return a pointer. A classic example:</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">run</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">3</span>; i++ {
        <span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
            fmt.Println(i)
        }()
    }
}
</code></pre>
<p>Compiler:</p>
<pre><code class="lang-plaintext">./main.go:5:10: func literal escapes to heap
./main.go:4:6: moved to heap: i
</code></pre>
<p>Why? Because the goroutine runs <em>after the loop iteration completes</em>. It needs access to <code>i</code>. Therefore <code>i</code> cannot live on the stack.</p>
<p>This is the precise moment when a beginner realizes that concurrency also changes lifetimes.</p>
<hr />
<h2 id="heading-what-escape-analysis-is-actually-doing"><strong>What Escape Analysis Is Actually Doing</strong></h2>
<p>It’s not trying to optimize your code. It’s proving safety.</p>
<p>If the compiler can prove a value is local → stack. If it cannot prove → heap.</p>
<p>It’s a conservative algorithm. A value will escape even when theoretically safe, simply because proving otherwise would require solving undecidable problems. Go plays it safe; that’s what keeps programs correct.</p>
<hr />
<h2 id="heading-how-to-see-escape-analysis-in-your-code"><strong>How to See Escape Analysis in Your Code</strong></h2>
<p>You can observe everything the compiler decides:</p>
<pre><code class="lang-plaintext">go build -gcflags="-m"
</code></pre>
<p>Or for even more detail (<code>-m=2</code> and <code>-m -m</code> are equivalent ways of asking for level-2 output):</p>
<pre><code class="lang-plaintext">go build -gcflags="-m=2"
</code></pre>
<p>This becomes addictive. You begin scanning your code and <em>predicting</em> escapes before the compiler prints them.</p>
<p>Once you reach that level, Go feels like a language that explains itself to you.</p>
<hr />
<h2 id="heading-why-this-matters-for-beginners"><strong>Why This Matters for Beginners</strong></h2>
<p>Understanding escape analysis is not about premature optimization. It’s about forming the mental model that Go expects you to have.</p>
<p>Once you understand lifetimes:</p>
<ul>
<li><p>you choose value vs pointer intentionally</p>
</li>
<li><p>you design APIs that minimize hidden allocations</p>
</li>
<li><p>your code scales better under load</p>
</li>
<li><p>you avoid concurrency pitfalls</p>
</li>
<li><p>you become predictable to the compiler</p>
</li>
</ul>
<p>This is exactly where beginners stop being beginners.</p>
<hr />
<h3 id="heading-want-to-go-further">Want to go further?</h3>
<p>This series focuses on <em>understanding Go</em>, not just using it.<br />If you want to continue in the same mindset, <a target="_blank" href="https://www.educative.io/unlimited?aff=BXPO&amp;utm_source=devto&amp;utm_medium=go-series&amp;utm_campaign=affiliate"><strong>Educative</strong></a> is a great next step.</p>
<p>It’s a single subscription that gives you access to <strong>hundreds of in-depth, text-based courses</strong> — from Go internals and concurrency to system design and distributed systems. No videos, no per-course purchases, just structured learning you can move through at your own pace.</p>
<p>👉 <a target="_blank" href="https://www.educative.io/unlimited?aff=BXPO&amp;utm_source=devto&amp;utm_medium=go-series&amp;utm_campaign=affiliate"><strong>Explore the full Educative library here</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Designing a High-Load Event Processing Pipeline: When Systems Begin to Breathe]]></title><description><![CDATA[There comes a point in the life of any backend system when it stops feeling like code and starts feeling like something alive. It develops rhythm. It inhales traffic and exhales processed results. It has quiet, sleepy nights and sudden bursts of fran...]]></description><link>https://blog.devflex.pro/designing-a-high-load-event-processing-pipeline-when-systems-begin-to-breathe</link><guid isPermaLink="true">https://blog.devflex.pro/designing-a-high-load-event-processing-pipeline-when-systems-begin-to-breathe</guid><dc:creator><![CDATA[Pavel Sanikovich]]></dc:creator><pubDate>Mon, 24 Nov 2025 14:42:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763995322544/d9a088ad-a547-4dd6-bf43-c0ba61c9b666.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>There comes a point in the life of any backend system when it stops feeling like code and starts feeling like something alive. It develops rhythm. It inhales traffic and exhales processed results. It has quiet, sleepy nights and sudden bursts of frantic activity. There are pulses, irregular waves, unpredictable spikes. And somewhere along the way you realize: your service no longer “handles events.” It <em>lives inside their flow</em>.</p>
<p>When this shift happens, technical problems change. The bottlenecks no longer hide inside functions; they appear in the intervals between them. Throughput becomes secondary. The real enemy is mismatch — between how fast the world pushes events into your system, and how fast your system can actually understand, transform, and persist them.</p>
<p>This is the moment when system design becomes the real engineering challenge.</p>
<hr />
<h2 id="heading-when-the-system-breaks-for-the-first-time">When the system breaks for the first time</h2>
<p>Most pipelines start small. A simple handler, a simple queue, a simple database write. It works well enough that nobody touches it. It works so well that people forget how fragile it is.</p>
<p>Then one day traffic spikes — maybe because of a marketing blast, maybe because a partner system retries aggressively, maybe because a batch job upstream flushes a backlog all at once. Suddenly your queue begins to swell. Latency stretches. The consumer falls behind. Retries multiply until they create more load than the original traffic.</p>
<p>The pipeline stops flowing and starts drowning.</p>
<p>This is the first real lesson of high-load event systems:</p>
<p><strong>they fail not because the code is slow, but because the flow becomes unmanageable.</strong></p>
<hr />
<h2 id="heading-the-shape-of-real-traffic">The shape of real traffic</h2>
<p>No real system receives a steady, comforting stream of events. Traffic arrives in waves — sometimes elegant and predictable, often messy and violent. Humans behave in bursts. Networks behave in bursts. Distributed retries behave in chaotic, amplifying bursts.</p>
<p>You cannot “smooth out” these patterns.</p>
<p>A resilient pipeline accepts that the world is uneven and builds space for chaos to exist safely.</p>
<p>That space usually takes the form of a queue — not as an architectural choice, but as a survival mechanism. The queue becomes a buffer between the storm of incoming events and the calmer process of turning them into something meaningful.</p>
<hr />
<h2 id="heading-queues-as-lungs">Queues as lungs</h2>
<p>In a healthy high-load system, the queue behaves like a pair of lungs. It allows the pipeline to inhale more than it can immediately process, and exhale steadily at a rate it can sustain.</p>
<p>Kafka and Redpanda became industry standards not because of trendiness, but because of how gracefully they handle irregularity. They accept spikes without panic. They distribute load. They replay. They hold the line.</p>
<p>Once your Go service reads from Kafka, it stops worrying about the pace of incoming events. The spike has already been absorbed upstream. The only remaining question is: <em>How fast can you work through them?</em></p>
<p>This moment — when ingestion is decoupled from processing — is the first real structural victory.</p>
<hr />
<h2 id="heading-the-art-of-consumption">The art of consumption</h2>
<p>Most pipelines break not in the queue, but in the consumer.</p>
<p>A consumer is not simply a loop that reads messages. It is a balancing act between the volatility of the outside world and the steadiness of your processing layer. Bad consumers spawn thousands of goroutines. They block on slow external APIs. They freeze when downstream systems degrade. They try to do too much at once, or not enough. They behave impulsively.</p>
<p>Good consumers are almost biological in their discipline. They take messages in small, digestible batches. They know their limits. They adjust their pace when downstream systems slow down. They treat failures as routine, not emergencies. They avoid unbounded concurrency. They understand idempotency as a fundamental requirement, not a luxury.</p>
<p>A good consumer behaves less like a function and more like a living system that maintains homeostasis.</p>
<hr />
<h2 id="heading-processing-where-optimism-dies">Processing: where optimism dies</h2>
<p>Processing is the most fragile stage of the pipeline because it touches the outside world. It calls APIs that may freeze, databases that may stall, caches that may expire at the wrong moment. It transforms data that may be malformed. It tries to enforce consistency in a world that refuses to be consistent.</p>
<p>A pipeline that assumes everything will be fast and successful is doomed. A resilient pipeline assumes:</p>
<ul>
<li><p>failures are normal</p>
</li>
<li><p>retries are inevitable</p>
</li>
<li><p>latencies fluctuate</p>
</li>
<li><p>dependencies degrade</p>
</li>
<li><p>storage hesitates</p>
</li>
<li><p>the system must remain stable through all of it</p>
</li>
</ul>
<p>This is why mature pipelines use retry budgets, circuit breakers, local caching, dead-letter queues, asynchronous writes, and write-behind patterns. Not because they sound architecturally pretty, but because systems without them eventually collapse.</p>
<hr />
<h2 id="heading-storage-a-tide-not-a-constant">Storage: a tide, not a constant</h2>
<p>Every event eventually needs to land somewhere — a database, an index, a log archive. But storage systems have their own personalities. They slow down unexpectedly. They warm up slowly. They behave differently under different load profiles.</p>
<p>A pipeline that writes synchronously into storage is a pipeline chained to its slowest component.</p>
<p>A pipeline that buffers, batches, and isolates storage writes is a pipeline that can keep breathing even when the datastore has a moment of weakness.</p>
<hr />
<h2 id="heading-backpressure-as-the-foundation-of-survival">Backpressure as the foundation of survival</h2>
<p>Modern systems rarely die from too little throughput. They die from too much. A system that cannot say “no” becomes a hostage of its own good performance.</p>
<p>Backpressure gives the pipeline the ability to decline work — politely, intentionally, and safely. It prevents cascades of failures. It enforces boundaries. It makes the system self-aware.</p>
<p>Backpressure is not a feature; it is a nervous system.</p>
<hr />
<h2 id="heading-what-go-adds-to-the-equation">What Go adds to the equation</h2>
<p>Go is unusually well-suited for event-driven pipelines. Not because it is the fastest language, but because its concurrency model maps naturally onto the idea of flow. Goroutines give us lightweight execution. Channels give us decoupling. Context gives us cancellation. Worker pools give us shape and boundaries.</p>
<p>But Go won’t save you from systemic mistakes. Unbounded concurrency will still drown you. Missing timeouts will still destroy you. Poor storage design will still block you. Misbehaving consumers will still collapse under load.</p>
<p>Go gives you tools — not immunity.</p>
<hr />
<h2 id="heading-when-the-system-finally-breathes">When the system finally breathes</h2>
<p>A well-designed pipeline doesn’t feel fast; it feels calm. It doesn’t panic when traffic spikes; it absorbs. It doesn’t stall when an external system slows down; it adapts. It doesn’t cascade on failure; it isolates, retries, and moves on.</p>
<p>This calmness is the true mark of good system design.</p>
<p>A high-load event pipeline is not the one that processes the most events per second, but the one that maintains <em>rhythm</em> under pressure.</p>
<p>When your system starts breathing — steadily, predictably, effortlessly — that’s when you know the architecture is finally right.</p>
]]></content:encoded></item><item><title><![CDATA[TMA Starter Kit: Build Telegram Mini Apps Without the Setup Struggle]]></title><description><![CDATA[Hey Hashnode crew! If you’ve ever thought about building a Telegram Mini App (TMA) but got bogged down by the setup—choosing a stack, wiring up a backend, configuring Docker—then I’ve got something for you. Meet TMA Starter Kit, a no-nonsense startin...]]></description><link>https://blog.devflex.pro/tma-starter-kit-build-telegram-mini-apps-without-the-setup-struggle</link><guid isPermaLink="true">https://blog.devflex.pro/tma-starter-kit-build-telegram-mini-apps-without-the-setup-struggle</guid><category><![CDATA[telegram]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[Vue.js]]></category><category><![CDATA[Telegram mini-app#]]></category><dc:creator><![CDATA[Pavel Sanikovich]]></dc:creator><pubDate>Wed, 05 Mar 2025 20:20:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741205909574/66629ba4-3993-4fc9-a3e5-74a67687c531.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey Hashnode crew! If you’ve ever thought about building a Telegram Mini App (TMA) but got bogged down by the setup—choosing a stack, wiring up a backend, configuring Docker—then I’ve got something for you. Meet <strong>TMA Starter Kit</strong>, a no-nonsense starting point I’ve been tinkering with to make Telegram Mini App development fast, fun, and frustration-free. Let’s dive into what it is, how it’s built, and why it might just be your next go-to tool.</p>
<h2 id="heading-whats-this-all-about">What’s This All About?</h2>
<p>Telegram Mini Apps are those slick little web apps that live inside Telegram bots—think interactive menus, mini-games, or order trackers. They’re awesome, but getting started? Not so much. You’ve got to juggle a frontend, a backend, a database, and deployment, all while praying it plays nice with Telegram. That’s where TMA Starter Kit comes in.</p>
<p>It’s a pre-built, batteries-included foundation: a modern frontend, a lightweight backend, and DevOps goodies, all wired up and ready to roll. The goal? Cut the setup grind so you can focus on coding the cool stuff.</p>
<h2 id="heading-why-bother-with-tma-starter-kit">Why Bother with TMA Starter Kit?</h2>
<p>I’ll level with you—building a TMA from scratch is a slog. You’re googling Vue.js configs, wrestling with Go routing, and cursing Docker networking before you even write a single feature. This kit skips the nonsense. Here’s what it brings to the table:</p>
<ul>
<li><p><strong>Zero-to-Running in Minutes</strong>: Clone it, run a Docker command, and boom—you’ve got a full app stack.</p>
</li>
<li><p><strong>Solid Tech Choices</strong>: Quasar (Vue.js) with TypeScript, Go for the API, and MongoDB—tools you’ll actually enjoy using.</p>
</li>
<li><p><strong>Docker Magic</strong>: Everything’s containerized. No “works on my machine” excuses here.</p>
</li>
<li><p><strong>Telegram-Ready</strong>: Clear steps to hook it into a bot and go live.</p>
</li>
<li><p><strong>Hackable</strong>: Modular setup means you can tweak it to fit your vibe.</p>
</li>
</ul>
<h2 id="heading-the-stack-breakdown">The Stack Breakdown</h2>
<p>Here’s how it’s pieced together—clean, simple, and purposeful:</p>
<ul>
<li><p><code>frontend/</code>: Powered by Quasar (Vue.js) with TypeScript and the Composition API. It’s where your Mini App’s UI lives—responsive, fast, and perfect for Telegram’s in-app vibe.</p>
</li>
<li><p><code>backend/</code>: A Go API that’s lean and mean. Handles your logic, talks to the frontend, and scales like a champ.</p>
</li>
<li><p><code>devops/</code>: Docker Compose files and CI/CD bits to keep it all humming. Spin up the whole stack with one command.</p>
</li>
</ul>
<p>It’s all glued together with Docker, so whether you’re on a Mac, Linux, or that sketchy Windows laptop, it just works.</p>
<h2 id="heading-how-do-you-use-it">How Do You Use It?</h2>
<p>Okay, let’s get practical. Here’s the quickstart:</p>
<ol>
<li><p><strong>Fire It Up</strong>:</p>
<pre><code class="lang-bash"> docker compose -f devops/docker-compose.dev.yml up -d
</code></pre>
<p> This launches the frontend (port 9000), backend, and MongoDB in containers. Done.</p>
</li>
<li><p><strong>Expose It</strong>: Install <a target="_blank" href="https://github.com/localtunnel/localtunnel">localtunnel</a> (<code>npm i -g localtunnel</code>), then:</p>
<pre><code class="lang-bash"> lt --port 9000 --subdomain my-cool-tma
</code></pre>
<p> You’ll get a public URL like <a target="_blank" href="https://my-cool-tma.loca.lt"><code>https://my-cool-tma.loca.lt</code></a>.</p>
</li>
<li><p><strong>Telegram Time</strong>:</p>
<ul>
<li><p>Hit up <a target="_blank" href="https://t.me/BotFather">@BotFather</a>, make a bot with <code>/newbot</code>.</p>
</li>
<li><p>Use <code>/setmenubutton</code> to point the bot’s menu button at your localtunnel URL.</p>
</li>
<li><p>Open your bot, tap the menu, and voilà—your Mini App’s live.</p>
</li>
</ul>
</li>
</ol>
<p>Total time? Maybe 10 minutes if you’re sipping coffee slowly.</p>
<h2 id="heading-whos-this-for">Who’s This For?</h2>
<ul>
<li><p><strong>Prototypers</strong>: Got a bot idea? Test it before dinner.</p>
</li>
<li><p><strong>Side Hustlers</strong>: Building a client app? Ship it fast and look pro.</p>
</li>
<li><p><strong>Tinkerers</strong>: New to TMA or modern stacks? Play around and learn.</p>
</li>
</ul>
<h2 id="heading-a-real-example">A Real Example</h2>
<p>Picture this: you’re coding a bot to track coffee orders for your team. With TMA Starter Kit:</p>
<ul>
<li><p>Quasar frontend whips up a form in an hour.</p>
</li>
<li><p>Go backend saves orders to MongoDB with a few lines.</p>
</li>
<li><p>Docker and localtunnel get it online, and Telegram’s hosting your app by noon.</p>
</li>
</ul>
<p>From scratch? You’d still be debugging CORS errors. Trust me, I’ve been there.</p>
<h2 id="heading-why-i-built-it-and-why-you-might-care">Why I Built It (and Why You Might Care)</h2>
<p>I’m a sucker for Telegram’s ecosystem—it’s huge, engaged, and ripe for innovation. But every time I started a TMA, I wasted hours on boilerplate. So, I made this kit to scratch my own itch. If you’ve ever felt that setup pain, it might scratch yours too.</p>
<h2 id="heading-give-it-a-spin">Give It a Spin</h2>
<p>TMA Starter Kit isn’t some polished product—it’s a practical starting point, rough edges and all. Clone it, break it, make it yours. If you build something cool (or find a bug), hit me up—I’d love to hear about it.</p>
<p><a target="_blank" href="https://github.com/devflex-pro/tma-starter-kit"><strong>Check it out on GitHub</strong></a></p>
<p>Happy coding, Hashnode fam! Let’s build some killer Telegram apps together.</p>
]]></content:encoded></item></channel></rss>