<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>ariya.io</title>
    <link>https://ariya.io/</link>
    <description>Recent content on ariya.io</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Sat, 28 Feb 2026 15:33:54 -0800</lastBuildDate>
    <atom:link href="https://ariya.io/index.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title>GTX 1080 Ti for Local LLM</title>
      <link>https://ariya.io/2026/02/gtx-1080-ti-for-local-llm</link>
      <pubDate>Sat, 28 Feb 2026 15:33:54 -0800</pubDate>
      
      <guid>https://ariya.io/2026/02/gtx-1080-ti-for-local-llm</guid>
      <description>&lt;p&gt;Despite being over eight years old, the NVIDIA GTX 1080 Ti remains a compelling choice for enthusiasts keen on running LLMs locally.&lt;/p&gt;

&lt;p&gt;Initially launched in early 2017 with a $699 MSRP, this &lt;a href=&#34;https://www.techpowerup.com/gpu-specs/geforce-gtx-1080-ti.c2877&#34;&gt;GTX 1080 Ti&lt;/a&gt; card quickly earned a “legendary GPU” reputation among tech reviewers and YouTubers. Today, it’s readily available on the second-hand market (particularly in Northern California or on &lt;a href=&#34;https://www.ebay.com/sch/i.html?_nkw=gtx+1080+ti&#34;&gt;eBay&lt;/a&gt;) for around $150, often even less if you’re lucky.&lt;/p&gt;

&lt;p&gt;For this modest price, you acquire a card with 11 GB of VRAM, an unconventional yet highly practical configuration for modern LLMs. With quantized models, 11 GB typically offers ample space for model weights and the context windows crucial for RAG workflows, coding assistants, and other use cases. This makes it an ideal sweet spot for hobbyists: &lt;em&gt;affordable&lt;/em&gt; enough for experimentation, yet powerful enough to handle practical workloads.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2026/02/gtx-1080-ti.jpg&#34; alt=&#34;GTX 1080 Ti&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Please note that local LLM inference performance is primarily assessed by two metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt Processing: How quickly the LLM processes the input context.&lt;/li&gt;
&lt;li&gt;Token Generation: How rapidly the LLM produces output (the “answer”).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These metrics, often denoted as &lt;em&gt;pp512&lt;/em&gt; and &lt;em&gt;tg128&lt;/em&gt; (where the numbers represent total tokens), can be measured using &lt;a href=&#34;https://github.com/ggml-org/llama.cpp&#34;&gt;llama.cpp&lt;/a&gt;, a very popular inference engine. As mentioned in our previous article on running LLMs locally with LM Studio and Jan, llama.cpp is one of the engines powering these applications.&lt;/p&gt;
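
&lt;p&gt;To build intuition for these two numbers, a rough latency model helps. The sketch below uses hypothetical rates rather than benchmark results:&lt;/p&gt;

```python
# Back-of-the-envelope latency for one LLM request: the wait before the
# first token is dominated by prompt processing (pp), and the total time
# adds token generation (tg) on top. Rates here are hypothetical.

def request_latency(prompt_tokens, output_tokens, pp_rate, tg_rate):
    """Return (seconds before the first token, total seconds)."""
    ttft = prompt_tokens / pp_rate
    total = ttft + output_tokens / tg_rate
    return ttft, total

# Example: an 8000-token RAG prompt and a 300-token answer at
# 1000 tok/s prompt processing and 60 tok/s generation.
ttft, total = request_latency(8000, 300, 1000, 60)
print(f"first token after {ttft:.1f}s, finished after {total:.1f}s")
# → first token after 8.0s, finished after 13.0s
```

&lt;p&gt;That eight-second stall before anything appears on screen is exactly the kind of lag a strong pp512 number keeps in check.&lt;/p&gt;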

&lt;p&gt;For LLM tasks requiring long context windows, such as RAG or coding assistants, pp512 performance is paramount. Slow pp512 leads to noticeable lag as large prompts take seconds to preprocess before any output appears. Meanwhile, for dynamic chats or creative writing, tg128 is more critical, as it dictates the fluidity of the model’s responses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;llama.cpp with CUDA&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To run benchmarks, you’ll first need a CUDA-enabled build of &lt;a href=&#34;https://github.com/ggml-org/llama.cpp&#34;&gt;llama.cpp&lt;/a&gt;, which involves installing NVIDIA’s &lt;a href=&#34;https://developer.nvidia.com/cuda-downloads&#34;&gt;CUDA toolkit&lt;/a&gt; and ensuring your development tools are correctly set up. A quick sanity check involves verifying the versions of nvcc, CMake, and gcc:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;nvcc --version
cmake --version
gcc --version
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If all three commands return version numbers, you’re ready to proceed.&lt;/p&gt;

&lt;p&gt;Next, clone the &lt;a href=&#34;https://github.com/ggml-org/llama.cpp&#34;&gt;llama.cpp&lt;/a&gt; repository and build it with CUDA enabled:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This compilation process can take some time, so be prepared for a short wait.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2026/02/llama-cpp-gtx-1080-ti.png&#34; alt=&#34;llama.cpp with GTX 1080 Ti&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Once built, test your setup with a small model like Google’s &lt;a href=&#34;https://huggingface.co/bartowski/google_gemma-3-1b-it-GGUF&#34;&gt;Gemma-3 1B&lt;/a&gt; (approximately 800 MB). You can interact with it via the command line (&lt;code&gt;llama-cli&lt;/code&gt;) or launch a lightweight web UI:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;./build/bin/llama-server -m /path/to/gemma-3-1b-it.gguf
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Running &lt;code&gt;nvtop&lt;/code&gt; concurrently will confirm that the GPU is actively utilized during inference.&lt;/p&gt;

&lt;p&gt;With llama.cpp configured, benchmarking is straightforward using the included &lt;code&gt;llama-bench&lt;/code&gt; tool. For consistent comparisons, Llama-2 7B with Q4_0 quantization is a popular choice, being small enough to run comfortably and widely benchmarked.&lt;/p&gt;

&lt;p&gt;Execute the following command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;./build/bin/llama-bench -ngl 100 -m llama-2-7b.Q4_0.gguf
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The output will provide both pp512 and tg128. While these speeds may not match modern GPUs, they are more than enough for local experimentation. Crucially, the pp512 performance remains competitive enough to make retrieval-heavy workloads (like document-based question answering) viable. The GTX 1080 Ti may not be the fastest, but it offers an excellent entry point into local AI.&lt;/p&gt;

&lt;p&gt;Community benchmarks in the llama.cpp discussions illustrate how this GPU compares:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CUDA (NVIDIA GPUs): &lt;a href=&#34;https://github.com/ggml-org/llama.cpp/discussions/15013&#34;&gt;https://github.com/ggml-org/llama.cpp/discussions/15013&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Metal (Apple Silicon): &lt;a href=&#34;https://github.com/ggml-org/llama.cpp/discussions/4167&#34;&gt;https://github.com/ggml-org/llama.cpp/discussions/4167&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ROCm (AMD GPUs): &lt;a href=&#34;https://github.com/ggml-org/llama.cpp/discussions/15021&#34;&gt;https://github.com/ggml-org/llama.cpp/discussions/15021&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vulkan (cross-platform): &lt;a href=&#34;https://github.com/ggml-org/llama.cpp/discussions/10879&#34;&gt;https://github.com/ggml-org/llama.cpp/discussions/10879&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These threads are invaluable resources for real-world performance data, aiding decisions on building new local inference rigs or repurposing existing hardware.&lt;/p&gt;

&lt;p&gt;A sample of these results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apple M4 Pro: pp512 = 439 tok/s and tg128 = 50 tok/s&lt;/li&gt;
&lt;li&gt;GTX 1080 Ti: pp512 = 1084 tok/s and tg128 = 62 tok/s&lt;/li&gt;
&lt;li&gt;RX 9060 XT: pp512 = 1478 tok/s and tg128 = 65 tok/s&lt;/li&gt;
&lt;li&gt;RTX 2080 Ti: pp512 = 2890 tok/s and tg128 = 107 tok/s&lt;/li&gt;
&lt;li&gt;RTX 3090: pp512 = 5174 tok/s and tg128 = 158 tok/s&lt;/li&gt;
&lt;/ul&gt;
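
&lt;p&gt;Normalizing these figures against the GTX 1080 Ti makes the relative standing easy to read; a few lines of Python do the arithmetic:&lt;/p&gt;

```python
# pp512 / tg128 figures (tokens/sec) from the sample above, normalized
# against the GTX 1080 Ti.
results = {
    "Apple M4 Pro": (439, 50),
    "GTX 1080 Ti": (1084, 62),
    "RX 9060 XT": (1478, 65),
    "RTX 2080 Ti": (2890, 107),
    "RTX 3090": (5174, 158),
}

base_pp, base_tg = results["GTX 1080 Ti"]
for gpu, (pp, tg) in results.items():
    print(f"{gpu:14} pp512 {pp / base_pp:.2f}x  tg128 {tg / base_tg:.2f}x")
```

&lt;p&gt;An RTX 3090 processes prompts nearly five times faster, but the 1080 Ti still outpaces an Apple M4 Pro on both metrics.&lt;/p&gt;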

&lt;p&gt;The &lt;a href=&#34;https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-ti.c3305&#34;&gt;RTX 2080 Ti&lt;/a&gt;, the 1080 Ti’s younger sibling, also features 11 GB VRAM. It represents a solid upgrade for increased speed, often found for around $250 on eBay if you’re fortunate with a bid.&lt;/p&gt;

&lt;p&gt;Among LLMs that run well on a GTX 1080 Ti, with 4-bit quantization and enough VRAM left over for context processing, popular choices include Qwen 3 8B, Llama 3.1 8B, Gemma 3 12B, and Granite 4.0 Tiny 7B. If speed is a higher priority, smaller variants (e.g., Qwen 3 4B) are also excellent candidates.&lt;/p&gt;

&lt;p&gt;Ultimately, the GTX 1080 Ti’s most significant advantage is its cost. At approximately $150 on the used market, it leaves considerable budget for the rest of your system. A complete, cost-effective build can come in under $500, with an example shown in the accompanying photo. We’ll delve deeper into this specific rig and its capabilities in upcoming newsletter articles, so stay tuned!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: This article originally &lt;a href=&#34;https://remotebrowser.substack.com/p/gtx-1080-ti-for-local-llm&#34;&gt;appeared&lt;/a&gt; on the Remote Browser Substack.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2026/02/itx-gtx-1080-ti.jpg&#34; alt=&#34;ITX build with GTX 1080 Ti&#34; /&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Not Everything is an Agent</title>
      <link>https://ariya.io/2025/03/not-everything-is-an-agent</link>
      <pubDate>Mon, 31 Mar 2025 22:47:17 -0800</pubDate>
      
      <guid>https://ariya.io/2025/03/not-everything-is-an-agent</guid>
      <description>&lt;p&gt;&amp;ldquo;Agent&amp;rdquo; is likely to become the word that causes existential dread for true LLM enthusiasts.&lt;/p&gt;

&lt;p&gt;Everyone&amp;rsquo;s got a different idea of what it means. In our modern age of innovation theater, lots of organizations gleefully slap the &amp;ldquo;agentic&amp;rdquo; label on anything that vaguely resembles a regular program (and pocket tons of money). Even a simple HTTP call to an LLM-as-a-Service can be called an agent, if you try desperately hard enough.&lt;/p&gt;

&lt;p&gt;The internet, as always, is flooded with &amp;ldquo;groundbreaking&amp;rdquo; tutorials on building these so-called agents. Often authored by the latest &lt;em&gt;hypefluencers&lt;/em&gt;, they typically involve a few lines (probably generated by whatever coding assistant is currently trending on Hacker News) that compose LangChain and an Ollama instance, often being presented as the pinnacle of AI autonomy. Because why bother with actual innovation when you can just repeat the quasi-boilerplate code &lt;em&gt;ad nauseam&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;That&amp;rsquo;s why I liked it &lt;strong&gt;a lot&lt;/strong&gt; when the Anthropic article, &lt;a href=&#34;https://www.anthropic.com/engineering/building-effective-agents&#34;&gt;Building effective agents&lt;/a&gt;, came out, as it dares to suggest that simply bolting on retrieval or memory to an LLM does &lt;em&gt;not&lt;/em&gt;, in fact, make an agent. And chaining or routing? That&amp;rsquo;s just glorified &lt;em&gt;control flow&lt;/em&gt;, folks. Only when an LLM is tasked with truly complex, real-world tasks, such as coding or using a computer, does it begin to resemble the autonomous agent we&amp;rsquo;ve been promised.&lt;/p&gt;

&lt;p&gt;So how do you identify a real agent? Don&amp;rsquo;t be fooled by the grand pronouncements of those rearranging deck chairs on the Titanic. Ask for the receipts of successful evaluations! Anecdotal evidence of a few successful LLM calls isn&amp;rsquo;t that useful. Remember, in the world of LLMs, as in life, the loudest claims are often the emptiest!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Afterburner and Power Limit</title>
      <link>https://ariya.io/2025/02/afterburner-and-power-limit</link>
      <pubDate>Fri, 28 Feb 2025 21:37:19 -0800</pubDate>
      
      <guid>https://ariya.io/2025/02/afterburner-and-power-limit</guid>
      <description>&lt;p&gt;Ever witnessed a fighter jet spewing hot flames as it kicks into afterburner? In that moment, efficiency is deliberately sacrificed for maximum acceleration.&lt;/p&gt;

&lt;p&gt;In the midst of combat, efficiency means nothing when your life is on the line. The jet engine must keep roaring, before the pilot gets taken down by the enemy (and potentially meets their maker).&lt;/p&gt;

&lt;p&gt;A GPU faces a similar fate. When pushed to consume hundreds of watts to churn out LLM tokens at breakneck speed for the user, there&amp;rsquo;s no choice but to run as fast as possible, even if sweat is pouring and muscle fatigue reaches its peak.&lt;/p&gt;

&lt;p&gt;Fortunately, &lt;code&gt;nvidia-smi&lt;/code&gt;, with its &lt;code&gt;-pl&lt;/code&gt; (&lt;em&gt;power limit&lt;/em&gt;) option, can be used to set an upper limit on power consumption, so the GPU doesn&amp;rsquo;t go completely overboard. Those last few dozen watts often don&amp;rsquo;t make a significant difference in performance, but they definitely contribute to heat generation, which needs to be monitored.&lt;/p&gt;
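
&lt;p&gt;As a concrete example, the following commands cap the card at 200 watts (they require root privileges and an NVIDIA GPU; the valid range differs per card, so treat the number as illustrative):&lt;/p&gt;

```shell
# Enable persistence mode so the setting sticks between processes,
# then cap the board power draw at 200 W.
sudo nvidia-smi -pm 1
sudo nvidia-smi -pl 200

# Verify the new limit and watch the actual draw during inference.
nvidia-smi --query-gpu=power.limit,power.draw --format=csv
```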

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2025/02/powerlimit.png&#34; alt=&#34;Power Limit&#34; /&gt;&lt;/p&gt;

&lt;p&gt;From the graph (measured with &lt;code&gt;llama-bench&lt;/code&gt;, for the Mistral-7B-Instruct model, Q4), it&amp;rsquo;s evident that pushing the power further doesn&amp;rsquo;t lead to increased LLM speed. 250, 300, or even 350 watts, it&amp;rsquo;s more or less the same. Meanwhile, dropping to 200 watts does slightly decrease the speed, but it&amp;rsquo;s very worthwhile considering the power consumption is reduced by a third.&lt;/p&gt;
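
&lt;p&gt;The trade-off is easy to quantify as tokens per joule. With made-up but representative numbers, say 100 tok/s at 300 watts dropping to 95 tok/s at 200 watts, the efficiency gain is substantial:&lt;/p&gt;

```python
# Tokens per joule = throughput / power. The throughput figures below are
# illustrative placeholders, not readings from the graph.
def tokens_per_joule(tok_per_s, watts):
    return tok_per_s / watts

full = tokens_per_joule(100, 300)    # no power limit
capped = tokens_per_joule(95, 200)   # capped at 200 W

print(f"{full:.3f} tok/J at 300 W, {capped:.3f} tok/J at 200 W")
print(f"efficiency gain from capping: {capped / full - 1:.0%}")
```

&lt;p&gt;Giving up a few percent of speed buys roughly forty percent more tokens per joule.&lt;/p&gt;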

&lt;p&gt;Saving energy is always a wise choice!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Privacy-Preserving Personal Search Appliance</title>
      <link>https://ariya.io/2025/01/privacy-preserving-personal-search-appliance</link>
      <pubDate>Thu, 30 Jan 2025 20:27:09 -0800</pubDate>
      
      <guid>https://ariya.io/2025/01/privacy-preserving-personal-search-appliance</guid>
      <description>&lt;p&gt;This is powered by SearXNG, an excellent open-source meta search engine.&lt;/p&gt;

&lt;p&gt;Unlike traditional search engines like Google or Bing, &lt;a href=&#34;https://github.com/searxng/searxng&#34;&gt;SearXNG&lt;/a&gt; doesn’t crawl the web and index content. Instead, it leverages other search engines like DuckDuckGo, Qwant, and Mojeek to fetch results while protecting your privacy. This means your personal information isn’t tracked by those upstream services.&lt;/p&gt;

&lt;p&gt;With the rise of LLMs and RAG, SearXNG has gained even more popularity. But I’ll dive into that in a future post.&lt;/p&gt;

&lt;p&gt;Setting up SearXNG is a breeze. You can use Docker or &lt;a href=&#34;https://podman.io&#34;&gt;Podman&lt;/a&gt; (my favorite Docker replacement, everyone should use it!) to get it running quickly. In fact, I encourage you to try it on your main machine. You’ll be surprised how easy it is!&lt;/p&gt;
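
&lt;p&gt;For reference, a minimal Podman invocation might look like this (it assumes the official &lt;code&gt;searxng/searxng&lt;/code&gt; image on Docker Hub, which listens on port 8080 inside the container):&lt;/p&gt;

```shell
# Run SearXNG as a detached container, published on local port 8080.
# A named volume keeps the settings in /etc/searxng across upgrades.
podman run -d --name searxng \
  -p 8080:8080 \
  -v searxng-config:/etc/searxng \
  docker.io/searxng/searxng:latest

# The instance should now answer at http://localhost:8080
```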

&lt;p&gt;A little fun fact about the name. SearXNG is an active fork of SearX. As is typically the case, NG stands for &amp;ldquo;next generation&amp;rdquo;. However, the X in SearX is actually the Greek letter &lt;em&gt;chi&lt;/em&gt;, which is often transliterated as &amp;ldquo;ch&amp;rdquo;. So, you could think of SearXNG as &amp;ldquo;searching&amp;rdquo;.&lt;/p&gt;

&lt;p&gt;While you can run SearXNG on your main machine, a dedicated device offers several advantages. You can share it with your family or colleagues and add extra security layers like Tailscale, Wireguard, or the good old OpenVPN.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2025/01/searxng.jpg&#34; alt=&#34;SearXNG Appliance&#34; /&gt;&lt;/p&gt;

&lt;p&gt;For my little box, I chose a used Shuttle DH170 with an Intel Core i3 6100 (2 cores, 4 threads) and 16GB of RAM. This might seem like overkill, but it’s more than enough for SearXNG. The 200GB SSD is also plenty of storage. The total cost of the hardware was just $70. I could have saved more by using less RAM and storage, but I had these components on hand.&lt;/p&gt;

&lt;p&gt;In terms of power consumption, the appliance idles at around 10W. I haven’t optimized it yet using tools like PowerTOP, but even so, it’s quite efficient. The off-the-shelf x86 architecture offers excellent upgrade potential. I could easily swap in a more powerful CPU like an Intel Core i7 6700 (4 cores, 8 threads) or add more RAM and storage if needed.&lt;/p&gt;

&lt;p&gt;Initially, I considered using &lt;a href=&#34;https://www.proxmox.com&#34;&gt;Proxmox&lt;/a&gt; to manage the system and run SearXNG in a container. However, I found this to be too complex. Instead, I opted for a simpler approach using vanilla Debian and Podman. Then, I remembered that there is a project, &lt;a href=&#34;https://casaos.io&#34;&gt;CasaOS&lt;/a&gt;, a user-friendly home server OS. It offers a web-based interface for remote management and can easily run SearXNG. If you’re new to home servers, CasaOS is a great way to get started.&lt;/p&gt;

&lt;p&gt;If you prefer a more resource-constrained solution, you could use a Raspberry Pi or a similar device. CasaOS also works well on ARM-based systems.&lt;/p&gt;

&lt;p&gt;In today’s digital age, privacy is a fundamental right. Unfortunately, our digital footprints are being exploited, and our personal data is being harvested. Rampant privacy violations are becoming the norm. Let’s take proactive steps to protect ourselves and our loved ones!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>LLM Inference Machine for $300</title>
      <link>https://ariya.io/2024/12/llm-inference-machine-for-300</link>
      <pubDate>Fri, 27 Dec 2024 20:17:14 -0800</pubDate>
      
      <guid>https://ariya.io/2024/12/llm-inference-machine-for-300</guid>
      <description>&lt;p&gt;You can absolutely run &lt;a href=&#34;https://qwenlm.github.io&#34;&gt;Qwen-2.5 32B&lt;/a&gt;. And of course, &lt;a href=&#34;https://ai.meta.com/blog/meta-llama-3-1/&#34;&gt;Llama-3.1 8B&lt;/a&gt; and &lt;a href=&#34;https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/&#34;&gt;Llama-3.2 Vision 11B&lt;/a&gt; are no problem at all.&lt;/p&gt;

&lt;p&gt;Now, before you get too excited, there&amp;rsquo;s a catch: this rig won&amp;rsquo;t break any speed records (more on that later). But if you&amp;rsquo;re after a budget-friendly way to do LLM research, this build might be just what you need.&lt;/p&gt;

&lt;p&gt;Here&amp;rsquo;s a breakdown of the parts and the amazing prices I got them for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AMD Ryzen 5 3400G: $50&lt;/li&gt;
&lt;li&gt;Gigabyte X570 motherboard: $30&lt;/li&gt;
&lt;li&gt;16 GB DDR4-3200 RAM: $30&lt;/li&gt;
&lt;li&gt;512 GB SSD: $20&lt;/li&gt;
&lt;li&gt;NVIDIA Tesla M40: $100&lt;/li&gt;
&lt;li&gt;Cooler for M40: $30&lt;/li&gt;
&lt;li&gt;EVGA 750W PSU: $20&lt;/li&gt;
&lt;li&gt;Silverstone HTPC case: $20&lt;/li&gt;
&lt;/ul&gt;
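
&lt;p&gt;As a quick sanity check, the parts list really does add up to the headline figure:&lt;/p&gt;

```python
# Price tally, taken straight from the parts list above.
parts = {
    "AMD Ryzen 5 3400G": 50,
    "Gigabyte X570 motherboard": 30,
    "16 GB DDR4-3200 RAM": 30,
    "512 GB SSD": 20,
    "NVIDIA Tesla M40": 100,
    "Cooler for M40": 30,
    "EVGA 750W PSU": 20,
    "Silverstone HTPC case": 20,
}
total = sum(parts.values())
print(f"total: ${total}")  # → total: $300
```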

&lt;p&gt;The motherboard was a crazy good find, a broken PCIe latch got me a killer deal. The Ryzen 3400G is outdated by today&amp;rsquo;s standards, with only 4 cores and 8 threads, but for a GPU-focused inference rig, it&amp;rsquo;s more than enough. Bonus: its Vega iGPU frees up the PCIe slot for the real star of the show, the &lt;a href=&#34;https://www.techpowerup.com/gpu-specs/tesla-m40.c2771&#34;&gt;M40 GPU&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Speaking of the GPU, it&amp;rsquo;s a Maxwell-era data center card with a massive 24GB of VRAM. That much memory is essential for running hefty 32B models (quantized, of course).&lt;/p&gt;

&lt;p&gt;While you can find a used M40 on eBay for around $90 these days, I had to buy an additional cooling solution (two small fans in a 3D-printed shroud), since data center GPUs usually don&amp;rsquo;t come with coolers or blowers like their consumer counterparts.&lt;/p&gt;

&lt;p&gt;Here are the token generation speeds for several instruction-tuned models, quantized to 4-bit (Q4_K_M), measured with &lt;a href=&#34;https://github.com/ggerganov/llama.cpp/tree/master/examples/llama-bench&#34;&gt;llama-bench&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Phi-3.5 Mini: 47 tok/s&lt;/li&gt;
&lt;li&gt;Mistral 7B: 30 tok/s&lt;/li&gt;
&lt;li&gt;Llama-3.1 8B: 28 tok/s&lt;/li&gt;
&lt;li&gt;Mistral Nemo 12B: 19 tok/s&lt;/li&gt;
&lt;li&gt;Qwen-2.5 Coder 32B: 7 tok/s&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2024/12/local-llm-machine.jpg&#34; alt=&#34;Local LLM Machine&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Performance is all relative. Compared to the latest RTX 3000 series, the M40 is definitely the slower sibling: about 5x slower, to be exact. But then again, an RTX 3090 is roughly 10x more expensive. Meanwhile, a more affordable RTX 3080 might limit your options with its 10GB (or 12GB for the enthusiast version) of VRAM.&lt;/p&gt;

&lt;p&gt;An RTX 2080 Ti with 11GB VRAM could be a nice upgrade. Prices in the used market are dropping ($250 or less at the time of writing), and it delivers a solid 3x speed boost compared to the M40. Double the cost for triple the speed? That&amp;rsquo;s a pretty sweet deal!&lt;/p&gt;

&lt;p&gt;How about Apple Silicon? The M2 Pro with its Metal GPU is roughly 25% faster than the M40. It wins easily in areas like portability, efficiency, and noise levels, but it comes with a significantly higher cost.&lt;/p&gt;

&lt;p&gt;Coding assistance is a proven home-run use case for powerful LLMs. This is where the M40&amp;rsquo;s massive 24GB VRAM shines, enabling you to run the fantastic &lt;a href=&#34;https://qwenlm.github.io&#34;&gt;Qwen-2.5 Coder 32B model&lt;/a&gt;. Pair it with &lt;a href=&#34;https://www.continue.dev/&#34;&gt;Continue.dev&lt;/a&gt; as your coding assistant, and you’ve got a powerful combo that could replace tools like &lt;a href=&#34;https://github.com/features/copilot&#34;&gt;GitHub Copilot&lt;/a&gt; or &lt;a href=&#34;https://codeium.com&#34;&gt;Codeium&lt;/a&gt;, particularly for medium-complexity projects.&lt;/p&gt;

&lt;p&gt;The best part? Privacy and data security. With local LLM inference, your precious source code stays on your machine.&lt;/p&gt;

&lt;p&gt;Now, should I go all in? Is it time to add a second M40?&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Deploying an Uberjar to Dokku</title>
      <link>https://ariya.io/2023/02/deploying-an-uberjar-to-dokku</link>
      <pubDate>Thu, 23 Feb 2023 11:37:36 -0800</pubDate>
      
      <guid>https://ariya.io/2023/02/deploying-an-uberjar-to-dokku</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://dokku.com&#34;&gt;Dokku&lt;/a&gt; is a self-hosted Platform-as-a-Service (PaaS) that offers a compelling alternative to popular PaaS solutions like Heroku. With built-in support for Linux containers, deploying an application on Dokku is straightforward. However, there is a lesser-known deployment method that involves sending a build artifact, such as a JAR package for Java apps, directly to Dokku.&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;https://dokku.com/docs/development/plugin-triggers/#git-from-archive&#34;&gt;This deployment method&lt;/a&gt; is useful when there is a need to quickly and frequently deploy the latest version of a custom application. By skipping the process of creating a container image, developers can focus on building the artifact for local development. This approach can be applied to packaged applications built with various programming languages, including Python, Java, JavaScript, PHP, etc.&lt;/p&gt;

&lt;p&gt;To follow this process, it is necessary to have the packaged Java application in the form of &lt;a href=&#34;https://stackoverflow.com/q/11947037&#34;&gt;an Uberjar&lt;/a&gt;, i.e. a JAR archive that contains all dependencies and can be executed by the JVM without requiring additional packages at runtime. The process assumes that Dokku has been installed on a machine named &lt;code&gt;dokku.homelab.lan&lt;/code&gt;, and the &lt;code&gt;dokku&lt;/code&gt; command is working properly:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ dokku version
dokku version 0.29.4
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Also, due to some heavy building that is going to happen on that Dokku machine, make sure there is ample free capacity on the disk, ideally 8 GB or more (depending on the application).&lt;/p&gt;

&lt;p&gt;If we are deploying an Uberjar, obviously that Uberjar needs to exist first. For this example, I am using an Uberjar from the open-source edition of &lt;a href=&#34;https://metabase.com&#34;&gt;Metabase&lt;/a&gt; (adjust things to suit your needs). Note that Metabase is written in &lt;a href=&#34;https://clojure.org&#34;&gt;Clojure&lt;/a&gt;, not Java, though it runs on the JVM. In theory, any other JVM language (e.g. Kotlin, Scala, etc.) can work as well.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ curl -OL https://downloads.metabase.com/v0.45.2/metabase.jar
$ file ./metabase.jar 
./metabase.jar: Zip archive data, at least v1.0 to extract, compression method=store
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Two auxiliary files, a &lt;code&gt;Procfile&lt;/code&gt; and a &lt;code&gt;Dockerfile&lt;/code&gt;, are required. The &lt;code&gt;Procfile&lt;/code&gt; contains a single line of code that specifies how the application is executed, while the &lt;code&gt;Dockerfile&lt;/code&gt; details the construction of the container.&lt;/p&gt;

&lt;p&gt;The first file, &lt;code&gt;Procfile&lt;/code&gt;, is this one-liner:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;web: java -jar metabase.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The second file, &lt;code&gt;Dockerfile&lt;/code&gt;, will look familiar to anyone who has used Docker:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;FROM eclipse-temurin:17
WORKDIR /app
COPY . ./ 
RUN java -version
RUN ls -l /app 
EXPOSE 3000 
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The base image is set to the latest Long-Term Support (LTS) version of OpenJDK v17 using the Eclipse Temurin distribution from &lt;a href=&#34;https://adoptium.net&#34;&gt;Adoptium&lt;/a&gt;. The &lt;code&gt;EXPOSE&lt;/code&gt; line indicates the port Metabase uses, which is port 3000. The two optional &lt;code&gt;RUN&lt;/code&gt; lines are useful for debugging or resolving any issues that may arise.&lt;/p&gt;

&lt;p&gt;Next, we package the necessary files into a tarball by executing the following commands:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ tar cvf package.tar Procfile Dockerfile metabase.jar
$ file ./package.tar 
./package.tar: POSIX tar archive (GNU)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Before sending the tarball to Dokku, we must create an application. This is achieved by executing the following commands:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ dokku apps:create metabase
-----&amp;gt; Creating metabase...
-----&amp;gt; Creating new app virtual host file...
$ dokku proxy:ports-set metabase http:80:3000
dokku proxy:ports-set metabase http:80:3000
-----&amp;gt; Setting config vars
       DOKKU_PROXY_PORT_MAP:  http:80:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The last command maps the host&amp;rsquo;s port 80 to the container&amp;rsquo;s exposed port 3000. And now, the fun starts!&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ cat package.tar | dokku git:from-archive metabase --
-----&amp;gt; Fetching tar file from stdin
-----&amp;gt; Generating build context
       Striping 0 worth of directories from tarball
       Moving unarchived files and folders into place
-----&amp;gt; Updating git repository with specified build context
-----&amp;gt; Cleaning up...
-----&amp;gt; Building metabase from Dockerfile
-----&amp;gt; Setting config vars
       DOKKU_DOCKERFILE_PORTS:  3000
Sending build context to Docker daemon  271.7MB
Step 1/12 : FROM eclipse-temurin:17
Digest: sha256:f6562feb32844d0059616d6e54c6cc3127ccf77fb594ccb98cc4279ca15887ed
Status: Downloaded newer image for eclipse-temurin:17
 ---&amp;gt; 1e117025f42d
Step 2/12 : WORKDIR /app
 ---&amp;gt; Running in 89d26eed69f3
 ---&amp;gt; db157924a857
Step 3/12 : COPY . ./
 ---&amp;gt; 59e836261c66
Step 4/12 : RUN java -version
 ---&amp;gt; Running in 21df4266e534
openjdk version &amp;quot;17.0.6&amp;quot; 2023-01-17
OpenJDK Runtime Environment Temurin-17.0.6+10 (build 17.0.6+10)
OpenJDK 64-Bit Server VM Temurin-17.0.6+10 (build 17.0.6+10, mixed mode, sharing)
 ---&amp;gt; 16d451db8f1a
Step 5/12 : RUN ls -l /app
 ---&amp;gt; Running in 829a2df4f10e
total 265328
-rw-r--r-- 1 root root        92 Jan 31 02:57 Dockerfile
-rw-r--r-- 1 root root 271686194 Jan 31 02:57 metabase.jar
-rw-r--r-- 1 root root        28 Jan 31 02:57 Procfile
 ---&amp;gt; 67b7b1179da7
Step 6/12 : EXPOSE 3000
Step 7/12 : LABEL com.dokku.app-name=metabase
Step 8/12 : LABEL com.dokku.builder-type=dockerfile
Step 9/12 : LABEL com.dokku.image-stage=build
Step 10/12 : LABEL dokku=
Step 11/12 : LABEL org.label-schema.schema-version=1.0
Step 12/12 : LABEL org.label-schema.vendor=dokku
Successfully built a246db231b6f
Successfully tagged dokku/metabase:latest
-----&amp;gt; Releasing metabase...
-----&amp;gt; Checking for predeploy task
       No predeploy task found, skipping
-----&amp;gt; Checking for release task
       No release task found, skipping
-----&amp;gt; Checking for first deploy postdeploy task
       No first deploy postdeploy task found, skipping
-----&amp;gt; Deploying metabase via the docker-local scheduler...
-----&amp;gt; Configuring metabase.dokku.homelab.lan...(using built-in template)
-----&amp;gt; Creating http nginx.conf
       Reloading nginx
=====&amp;gt; Application deployed:
       http://metabase.dokku.homelab.lan
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The log may appear lengthy, but the steps it displays should be straightforward. On the Dokku target machine, a container image is constructed using the information in the tarball. From that image, a container is created and deployed using the standard Dokku machinery. If all goes well, the application (Metabase in this instance) will be up and running at the specified hostname.&lt;/p&gt;

&lt;p&gt;As you become more familiar with this method, sending build artifacts to Dokku after each change will become a natural part of your workflow!&lt;/p&gt;
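
&lt;p&gt;To make that workflow concrete, the whole cycle condenses into a tiny script (a sketch; it assumes the &lt;code&gt;dokku&lt;/code&gt; command is configured for the target host and the three files sit in the current directory):&lt;/p&gt;

```shell
#!/bin/sh
# redeploy.sh: repackage the current artifact and push it to Dokku.
set -e
tar cf package.tar Procfile Dockerfile metabase.jar
cat package.tar | dokku git:from-archive metabase --
echo "Deployed. Check http://metabase.dokku.homelab.lan"
```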
</description>
    </item>
    
    <item>
      <title>Continuous Integration for React Native Apps with GitHub Actions</title>
      <link>https://ariya.io/2020/12/continuous-integration-for-react-native-apps-with-github-actions</link>
      <pubDate>Tue, 29 Dec 2020 18:03:18 -0800</pubDate>
      
      <guid>https://ariya.io/2020/12/continuous-integration-for-react-native-apps-with-github-actions</guid>
      <description>&lt;p&gt;For React Native mobile apps targeting Android and iOS, an easy way to set up continuous integration is to take advantage of Actions, an automation workflow service provided by GitHub. Even better, for open-source projects, GitHub Actions offers unlimited free running minutes (at the time of this writing).&lt;/p&gt;

&lt;p&gt;The advantage of &lt;a href=&#34;https://reactnative.dev/&#34;&gt;React Native&lt;/a&gt; is a single code base targeting two major mobile platforms, iOS and Android. However, care must be taken so that when one developer focuses on implementing features or fixing defects on Android, whatever they check into the code will not break iOS, and vice versa. Ideally, that developer should always check and verify both platforms. But mistakes happen, and the best way to catch them is to ensure that the corresponding continuous integration (CI) is running smoothly to catch those potential problems early on.&lt;/p&gt;

&lt;p&gt;Thanks to &lt;a href=&#34;https://docs.github.com/en/free-pro-team@latest/actions&#34;&gt;GitHub Actions&lt;/a&gt; supporting &lt;a href=&#34;https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions&#34;&gt;workflow&lt;/a&gt; runs on macOS and Linux (and also Windows, though that is not too relevant for this purpose), creating a CI for React Native is easy enough. To follow along, check the sample project (in the style of Hello World) that I have created at &lt;a href=&#34;https://github.com/ariya/hello-react-native&#34;&gt;github.com/ariya/hello-react-native&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let us start with the Android build since it is the easiest. Create a file with the name &lt;code&gt;android.yml&lt;/code&gt; under the directory &lt;code&gt;.github/workflows&lt;/code&gt;. The content should be like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;name: Android

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-20.04
    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js v12
      uses: actions/setup-node@v1
      with:
        node-version: 12.x

    - run: npm ci

    - run: ./gradlew assembleDebug -Dorg.gradle.logging.level=info
      working-directory: android
      name: Build Android apk (debug)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The above YAML declares that this workflow must be executed for every pull request, as well as whenever a commit is pushed into the source repo (which includes merges). The workflow runs on an Ubuntu 20.04 machine which is, thanks to GitHub, &lt;a href=&#34;https://docs.github.com/en/free-pro-team@latest/actions/reference/specifications-for-github-hosted-runners&#34;&gt;already equipped&lt;/a&gt; with some development packages, including Java, the Android SDK, and many other bits and pieces necessary for Android development. The first step checks out the code (obvious), followed by another step to pick the &lt;a href=&#34;https://nodejs.org/&#34;&gt;Node.js&lt;/a&gt; version (12 in this case; feel free to adjust it to your project). The &lt;code&gt;npm ci&lt;/code&gt; step installs all the dependencies. The final step invokes &lt;a href=&#34;https://gradle.org/&#34;&gt;Gradle&lt;/a&gt; to build the app, just as it is done on a local development machine.&lt;/p&gt;

&lt;p&gt;Once this file is ready, commit it to the repo, push the branch, and voila! GitHub will start to execute that build process for every future branch push and for all pull requests (for this simple demo project, the build process takes about 3 minutes or less, not bad at all!). If the pull request does not break the Android build, we will see the usual green checkmark, as illustrated below. Of course, if the build breaks, the failure will be displayed and we can inspect the build log to find out what went wrong (this helps accelerate the troubleshooting).&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2020/12/rn-pr.png&#34; alt=&#34;Pull request&#34; /&gt;&lt;/p&gt;

&lt;p&gt;For completeness, we can also have the &lt;a href=&#34;https://docs.github.com/en/free-pro-team@latest/actions/guides/storing-workflow-data-as-artifacts&#34;&gt;build artifact&lt;/a&gt;, the APK file generated by Gradle, archived for every workflow run. To do that, add the following lines:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;    - uses: actions/upload-artifact@v2
      with:
        name: android-apk
        path: &#39;**/*.apk&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Clicking on the green checkmark icon in the commit view leads to the detailed result of the Action workflows for that particular commit. We can also find the link to the archived artifact, in this case the APK files. Since the artifacts are &lt;a href=&#34;https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/configuring-the-retention-period-for-github-actions-artifacts-and-logs-in-your-repository&#34;&gt;retained for some time&lt;/a&gt; (based on the project settings, defaulting to 90 days), this can be very handy when we want to troubleshoot a problem. Let us say a certain feature no longer works with today&amp;rsquo;s build, but we are confident that the same feature still worked with last week&amp;rsquo;s build. Rather than checking out different revisions and rebuilding the app, we can just grab the archived APK files. Since these are built in debug mode, we can comfortably launch them in an emulator and debug them just like an APK built in a local development environment.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2020/12/rn-artifact.png&#34; alt=&#34;Artifact&#34; /&gt;&lt;/p&gt;
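
&lt;p&gt;As a side note, the retention period can also be set per artifact; &lt;code&gt;actions/upload-artifact@v2&lt;/code&gt; accepts a &lt;code&gt;retention-days&lt;/code&gt; input (the value below is merely an example):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;    - uses: actions/upload-artifact@v2
      with:
        name: android-apk
        path: &#39;**/*.apk&#39;
        retention-days: 14
&lt;/code&gt;&lt;/pre&gt;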

&lt;p&gt;How about building for iOS? It is not exactly the same, but it follows the same principles. Here is a minimalistic workflow file, &lt;code&gt;ios.yml&lt;/code&gt;, as a starting point:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;name: iOS
on: [push, pull_request]
jobs:
  build:
    runs-on: macos-latest
    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js
      uses: actions/setup-node@v1
      with:
        node-version: 14.x
    - run: npm ci
    - run: xcode-select -p
    - run: pod install
      working-directory: ios
      name: Install pod dependencies
    - name: Build iOS (debug)
      run: &amp;quot;xcodebuild \
        -workspace ios/HelloReactNative.xcworkspace \
        -scheme HelloReactNative \
        clean archive \
        -sdk iphoneos \
        -configuration Debug \
        -UseModernBuildSystem=NO \
        -archivePath $PWD/HelloReactNative \
        CODE_SIGNING_ALLOWED=NO&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The first few lines are just like the Android workflow. The major difference here is that the workflow needs to run on a macOS machine, as the iOS SDK is not available on either Linux or Windows. The build steps follow a similar pattern: check out the code, set up Node.js, and install dependencies. There are two extra steps. The first one runs &lt;code&gt;xcode-select -p&lt;/code&gt; to ensure the readiness of the correct Xcode and its related tools. The second one, &lt;code&gt;pod install&lt;/code&gt;, installs any dependencies via &lt;a href=&#34;https://cocoapods.org/&#34;&gt;CocoaPods&lt;/a&gt;, assuming that the project uses CocoaPods to manage iOS-specific dependencies (usually it does). After that, we invoke the command-line debug build with Xcode, just like what we would do on a local machine. Since building for iOS is a bit more involved, for this simple demo project it runs for around 10 minutes, give or take.&lt;/p&gt;

&lt;p&gt;Note that the above YAML files cover the builds for iOS and Android. Please do not forget to create another workflow file that runs the tests, typically with Jest, to catch potential regressions in the unit tests and/or integration tests. Oftentimes, this workflow is also the best place to run various static and dynamic code analyzers (linters, code formatters, security scanners, etc.).&lt;/p&gt;
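
&lt;p&gt;As an illustration, such a test workflow could be as minimal as the following sketch (a hypothetical &lt;code&gt;test.yml&lt;/code&gt;, assuming the Jest suite is wired to &lt;code&gt;npm test&lt;/code&gt;):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-20.04
    steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-node@v1
      with:
        node-version: 12.x
    - run: npm ci
    - run: npm test
&lt;/code&gt;&lt;/pre&gt;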

&lt;p&gt;Armed with three workflow YAML files, we have fully established a simple yet powerful continuous integration for React Native apps. Happy developing!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>On GitHub Actions with MSYS2</title>
      <link>https://ariya.io/2020/07/on-github-actions-with-msys2</link>
      <pubDate>Fri, 31 Jul 2020 20:33:31 -0700</pubDate>
      
      <guid>https://ariya.io/2020/07/on-github-actions-with-msys2</guid>
      <description>&lt;p&gt;Thanks to the official GitHub Action for MSYS2, it is easier than ever to construct a continuous integration setup for building with compilers and toolchains that run on MSYS2.&lt;/p&gt;

&lt;p&gt;The details are available on the official page, &lt;a href=&#34;https://github.com/marketplace/actions/setup-msys2&#34;&gt;github.com/marketplace/actions/setup-msys2&lt;/a&gt;. However, perhaps it is best illustrated with a simple but concrete example. As usual, for this illustration, you will see the use of this simplistic Hello, world program in ANSI C. To follow along, check out its repository at &lt;a href=&#34;https://github.com/ariya/hello-c90&#34;&gt;github.com/ariya/hello-c90&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let us take a look at this workflow setup to build this C program with GCC on &lt;a href=&#34;https://www.msys2.org/&#34;&gt;MSYS2&lt;/a&gt; (on Windows, obviously):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;name: amd64_windows_gcc
on: [push, pull_request]
jobs:
  amd64_windows_gcc:
    runs-on: windows-2019
    defaults:
      run:
        shell: msys2 {0}
    steps:
    - uses: actions/checkout@v2
    - uses: msys2/setup-msys2@v2
      with:
        install: gcc make
    - run: gcc -v
    - run: make CC=gcc
    - run: file ./hello.exe
    - run: ./hello
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The important lines are in the &lt;code&gt;setup-msys2&lt;/code&gt; section. The &lt;code&gt;install&lt;/code&gt; value allows an easy selection of various &lt;a href=&#34;https://packages.msys2.org/search&#34;&gt;packages&lt;/a&gt; to be installed before proceeding to the next step. For this purpose, it is sufficient to install &lt;code&gt;gcc&lt;/code&gt; and &lt;code&gt;make&lt;/code&gt;, but YMMV.&lt;/p&gt;

&lt;p&gt;The rest is self-explanatory. Please note also the &lt;code&gt;defaults&lt;/code&gt; section earlier; it conveniently sets the default shell so that we do not need to specify it explicitly for every single &lt;code&gt;run&lt;/code&gt; step thereafter.&lt;/p&gt;
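
&lt;p&gt;Without that &lt;code&gt;defaults&lt;/code&gt; section, every step would have to spell out the shell on its own, along these lines:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;    - run: gcc -v
      shell: msys2 {0}
    - run: make CC=gcc
      shell: msys2 {0}
&lt;/code&gt;&lt;/pre&gt;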

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2020/07/msys2.png&#34; width=&#34;642&#34; height=&#34;310&#34;&gt;&lt;/p&gt;

&lt;p&gt;Now let us come up with another variant, this time for &lt;a href=&#34;https://clang.llvm.org/&#34;&gt;Clang&lt;/a&gt; instead of GCC (read also my previous post: &lt;a href=&#34;https://ariya.io/2020/01/clang-on-windows/&#34;&gt;Clang for Windows&lt;/a&gt;):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;name: amd64_windows_clang
on: [push, pull_request]
jobs:
  amd64_windows_clang:
    runs-on: windows-2019
    defaults:
      run:
        shell: msys2 {0}
    steps:
    - uses: actions/checkout@v2
    - uses: msys2/setup-msys2@v2
      with:
        install: make mingw-w64-x86_64-clang
    - run: clang --version
    - run: make CC=clang
    - run: file ./hello.exe
    - run: ./hello
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Pretty straightforward, isn&amp;rsquo;t it? We just change the package to be installed and the compiler to be used. Since the two YAML files are very similar, to avoid a lot of repeated steps we can parametrize them as follows, taking advantage of the &lt;a href=&#34;https://docs.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow#configuring-a-build-matrix&#34;&gt;matrix strategy feature&lt;/a&gt; of GitHub Actions.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;  amd64_windows:
    runs-on: windows-2019
    strategy:
      matrix:
        compiler: [gcc, clang]
    defaults:
      run:
        shell: msys2 {0}
    steps:
    - uses: actions/checkout@v2
    - uses: msys2/setup-msys2@v2
    - run: pacman --noconfirm -S make gcc
      if: ${{ matrix.compiler == &#39;gcc&#39; }}
    - run: pacman --noconfirm -S make mingw-w64-x86_64-clang
      if: ${{ matrix.compiler == &#39;clang&#39; }}
    - run: ${{ matrix.compiler }} --version
    - run: make CC=${{ matrix.compiler }}
    - run: file ./hello.exe
    - run: ./hello
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To take it one step further, we can also support both i686 and AMD64 platforms in the same YAML file, again by parametrizing the architecture. Here is how it looks:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;name: windows
on: [push, pull_request]
jobs:
  windows:
    runs-on: windows-2019
    strategy:
      matrix:
        compiler: [gcc, clang]
        msystem: [MINGW32, MINGW64]
    defaults:
      run:
        shell: msys2 {0}
    steps:
    - uses: actions/checkout@v2
    - uses: msys2/setup-msys2@v2
      with:
        msystem: ${{ matrix.msystem }}
        install: make
    - run: pacman --noconfirm -S gcc
      if: ${{ matrix.compiler == &#39;gcc&#39; }}
    - run: pacman --noconfirm -S mingw-w64-x86_64-clang
      if: ${{ (matrix.msystem == &#39;MINGW64&#39;) &amp;amp;&amp;amp; (matrix.compiler == &#39;clang&#39;) }}
    - run: pacman --noconfirm -S mingw-w64-i686-clang
      if: ${{ (matrix.msystem == &#39;MINGW32&#39;) &amp;amp;&amp;amp; (matrix.compiler == &#39;clang&#39;) }}
    - run: ${{ matrix.compiler }} --version
    - run: make CC=${{ matrix.compiler }}
    - run: file ./hello.exe
    - run: ./hello
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That&amp;rsquo;s all 4 combinations, 32-bit and 64-bit, for both GCC and Clang, in one simple configuration!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Cross-compiling with musl Toolchains</title>
      <link>https://ariya.io/2020/06/cross-compiling-with-musl-toolchains</link>
      <pubDate>Mon, 22 Jun 2020 05:37:59 -0700</pubDate>
      
      <guid>https://ariya.io/2020/06/cross-compiling-with-musl-toolchains</guid>
      <description>&lt;p&gt;When working on command-line utilities which can be useful on various platforms, from Windows on x86 to Linux on MIPS, the existence of a cross-compilation toolchain is highly attractive. A number of different binaries can be conveniently constructed from a single, typically powerful, host system.&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;https://alpinelinux.org&#34;&gt;Alpine Linux&lt;/a&gt; popularizes the use of &lt;a href=&#34;https://musl.libc.org&#34;&gt;musl&lt;/a&gt;, a no-frills C standard library for Linux. According to its website:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;musl is lightweight, fast, simple, free, and strives to be correct in the sense of standards-conformance and safety.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In addition, thanks to &lt;a href=&#34;https://zv.io&#34;&gt;Zach van Rijn&lt;/a&gt;, we have a collection of static toolchains based on musl at &lt;a href=&#34;https://musl.cc&#34;&gt;musl.cc&lt;/a&gt; at our disposal. The number of supported systems is rather mind-blowing: you get everything from the usual i686 to MIPS to MicroBlaze and many others.&lt;/p&gt;

&lt;p&gt;As I search for a viable alternative to the cross-compilation method based on Dockcross (see my previous blog post: &lt;a href=&#34;https://ariya.io/2019/06/cross-compiling-with-docker-on-wsl-2/&#34;&gt;Cross Compiling with Docker on WSL 2&lt;/a&gt;), musl.cc fits the requirements nicely. I am in the process of migrating the &lt;a href=&#34;https://github.com/ariya/fastlz/actions&#34;&gt;continuous integration&lt;/a&gt; of FastLZ, my implementation of the byte-aligned LZ77 compression algorithm, to be completely based on musl.cc.&lt;/p&gt;

&lt;p&gt;Here is a quick walkthrough. As long as you are on Linux x86-64, you can follow along easily (and yes, this also works great on &lt;a href=&#34;https://docs.microsoft.com/en-us/windows/wsl&#34;&gt;WSL&lt;/a&gt;, the Windows Subsystem for Linux). As a reference, we will use the simplest ANSI C/C90 program available at &lt;a href=&#34;https://github.com/ariya/hello-c90&#34;&gt;github.com/ariya/hello-c90&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2020/06/crosscompiler.png&#34; width=&#34;584&#34; height=&#34;357&#34; alt=&#34;Cross compilation with musl Toolchains&#34;/&gt;&lt;/p&gt;

&lt;p&gt;First and foremost, we need &lt;a href=&#34;https://qemu.org&#34;&gt;QEMU&lt;/a&gt; so we can test binaries that are not native to x86-64. For convenience, GNU Make is also necessary.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ sudo apt install -y qemu-user make
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After that, let us grab the Hello C90 program:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ git clone https://github.com/ariya/hello-c90.git
$ cd hello-c90
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For a start, let us try to produce a MIPS64 binary of our little Hello C90 program. Thus, we ought to grab the toolchain first, weighing in at about 90 MB.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ curl -O https://musl.cc/mips64-linux-musl-cross.tgz
$ tar xzf mips64-linux-musl-cross.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To ensure that this fresh cross-compiler works, do a quick sanity check:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ ./mips64-linux-musl-cross/bin/mips64-linux-musl-gcc --version
mips64-linux-musl-gcc (GCC) 9.3.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This looks good! Now we can compile our Hello C90 program statically:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ make CC=./mips64-linux-musl-cross/bin/mips64-linux-musl-gcc LDFLAGS=-static
./mips64-linux-musl-cross/bin/mips64-linux-musl-gcc -O -Wall -std=c90 -c hello.c
./mips64-linux-musl-cross/bin/mips64-linux-musl-gcc -static -o hello hello.o
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Checking the resulting binary should give the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ file ./hello
./hello: ELF 64-bit MSB executable, MIPS, MIPS-III version 1 (SYSV), statically linked, not stripped
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It is exactly what we want! To run the executable:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ qemu-mips64 ./hello
Hello, world! From C90 with love...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, if you are doing this on WSL, or generally have a Windows machine available elsewhere, there is this fun activity of cross-compiling the above app for Windows, without the need for any Windows compiler and SDK. Same steps as before:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ curl -O https://musl.cc/x86_64-w64-mingw32-cross.tgz
$ tar xzf x86_64-w64-mingw32-cross.tgz
$ make CC=./x86_64-w64-mingw32-cross/bin/x86_64-w64-mingw32-gcc LDFLAGS=-static
$ file ./hello.exe
./hello.exe: PE32+ executable (console) x86-64, for MS Windows
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To really test it, just bring &lt;code&gt;hello.exe&lt;/code&gt; to Windows and it is going to run as expected.&lt;/p&gt;

&lt;p&gt;For more details and elaborated examples, check the collection of &lt;a href=&#34;https://github.com/ariya/hello-c90/tree/master/.github/workflows&#34;&gt;workflow YAML files&lt;/a&gt; of this Hello C program.&lt;/p&gt;

&lt;p&gt;Combined with the continuous integration system of your choice, whether it is DIY via Jenkins or using one of the many services out there (GitHub Actions, Azure Pipelines, Travis CI), creating binaries for various operating systems becomes easier than ever!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Nix Package Manager on Ubuntu or Debian</title>
      <link>https://ariya.io/2020/05/nix-package-manager-on-ubuntu-or-debian</link>
      <pubDate>Sat, 30 May 2020 20:11:45 -0700</pubDate>
      
      <guid>https://ariya.io/2020/05/nix-package-manager-on-ubuntu-or-debian</guid>
      <description>&lt;p&gt;Even though Ubuntu/Debian is equipped with its legendary powerful package manager, &lt;em&gt;dpkg&lt;/em&gt;, in some cases, it is still beneficial to take advantage of &lt;a href=&#34;https://nixos.org/nix&#34;&gt;Nix&lt;/a&gt;, a purely functional package manager.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&#34;https://nixos.org/nix/manual&#34;&gt;complete manual&lt;/a&gt; of Nix does a fantastic job of explaining how to install and use it. But for the impatient among you, here is a quick overview. Note that this also works well when using Ubuntu/Debian under WSL (&lt;a href=&#34;https://ubuntu.com/wsl&#34;&gt;Windows Subsystem for Linux&lt;/a&gt;), both the original and the newer WSL 2.&lt;/p&gt;

&lt;p&gt;&lt;img align=&#34;right&#34; src=&#34;https://ariya.io/images/2020/05/nix.png&#34; width=&#34;347&#34; alt=&#34;Nix on Debian&#34;/&gt;&lt;/p&gt;

&lt;p&gt;First, create the &lt;code&gt;/nix&lt;/code&gt; directory owned by you (this is the common &lt;a href=&#34;https://nixos.org/nix/manual/#sect-single-user-installation&#34;&gt;single-user installation&lt;/a&gt;; substitute your own username in the command below):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ sudo mkdir /nix
$ sudo chown ariya /nix
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And then, run the installation script:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ sh &amp;lt;(curl -L https://nixos.org/nix/install) --no-daemon
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that if you use WSL 1, likely you will encounter some error such as:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SQLite database &#39;/nix/var/nix/db/db.sqlite&#39; is busy
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is a known &lt;a href=&#34;https://github.com/NixOS/nix/issues/2651&#34;&gt;issue&lt;/a&gt;; the workaround is to create a new file &lt;code&gt;~/.config/nix/nix.conf&lt;/code&gt; with the following content:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sandbox = false
use-sqlite-wal = false
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;and repeat the previous step.&lt;/p&gt;

&lt;p&gt;If nothing goes wrong, the script will perform the installation. Grab a cup of tea while waiting for it!&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;downloading Nix 2.3.4 binary tarball for x86_64-linux
performing a single-user installation of Nix...
copying Nix to /nix/store......................................
replacing old &#39;nix-2.3.4&#39;
installing &#39;nix-2.3.4&#39;
unpacking channels...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that the last step (unpacking channels) can run for a very long time (no idea why; hopefully it will be fixed at some point). Just be patient.&lt;/p&gt;

&lt;p&gt;To check whether Nix is successfully installed, we use the &lt;em&gt;Hello, world&lt;/em&gt; tradition:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ nix-env -i hello
installing &#39;hello-2.10&#39;
these paths will be fetched (6.62 MiB download, 31.61 MiB unpacked):
/nix/store/9l6d9k9f0i9pnkfjkvsm7xicpzn4cv2c-libidn2-2.3.0
/nix/store/df15mgn0zsm6za1bkrbjd7ax1f75ycgf-hello-2.10
/nix/store/nwsn18fysga1n5s0bj4jp4wfwvlbx8b1-glibc-2.30
/nix/store/pgj5vsdly7n4rc8jax3x3sill06l44qp-libunistring-0.9.10
$ which hello
/home/ariya/.nix-profile/bin/hello
$ hello
Hello, world!
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In the above illustration, &lt;code&gt;hello&lt;/code&gt; is a test package that does nothing but display the famous message. It looks simple, and yet it is very useful!&lt;/p&gt;

&lt;p&gt;To get a feeling for the packages available at your disposal (almost 29 thousand of them):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ nix-env -qa  &amp;gt; nix-packages.list
$ wc -l nix-packages.list
28974 nix-packages.list
$ less nix-packages.list
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;While it is not a substitute for the large collection of existing Debian/Ubuntu packages, very often what you get from Nix is more up-to-date. For instance, if you are stuck with a typical Ubuntu 18.04 LTS, it offers git 2.17.1, tmux 2.6.3, jq 1.5, curl 7.58, and Neovim 0.2.2. But, with Nix on that same Ubuntu system, at the time of this writing, you can enjoy git 2.26.2, tmux 3.1b, jq 1.6, curl 7.69, and Neovim 0.4.3.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2016/06/nixshell.png&#34; align=&#34;right&#34;/&gt;
The way I use Nix, however, is not merely as a mechanism to get fresher software and other utilities. Rather, the functional nature of Nix leads to the possibility of multiple working environments, each with a distinctive set of applications and tools, and the ability to &lt;em&gt;switch cleanly&lt;/em&gt; between them. Those who use &lt;a href=&#34;https://github.com/nvm-sh/nvm&#34;&gt;nvm&lt;/a&gt; (for Node.js) or &lt;a href=&#34;https://virtualenv.pypa.io&#34;&gt;virtualenv&lt;/a&gt; (for Python) can probably appreciate this. Now, imagine nvm/virtualenv applied not only to Node.js/Python, but to an arbitrary set of packages. I have covered this in detail in my previous blog post, &lt;a href=&#34;https://ariya.io/2016/06/isolated-development-environment-using-nix/&#34;&gt;Isolated Development Environment using Nix&lt;/a&gt;. That blog post talked about Nix on macOS, but obviously the experience translates directly to Nix on Debian, Ubuntu, or any other Linux distribution.&lt;/p&gt;
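
&lt;p&gt;As a taste of that workflow, here is a minimal sketch of a &lt;code&gt;shell.nix&lt;/code&gt; (the chosen packages are merely illustrative) that pins a set of tools for one project:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;with import &lt;nixpkgs&gt; {};
mkShell {
  # tools available only inside this environment
  buildInputs = [ git jq curl neovim ];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Running &lt;code&gt;nix-shell&lt;/code&gt; in the directory containing that file drops you into a shell where exactly those tools are available, without polluting the rest of the system.&lt;/p&gt;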

&lt;p&gt;I hope this will inspire you to explore &lt;a href=&#34;https://nixos.org/nix&#34;&gt;Nix&lt;/a&gt; in depth!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Practical Testing of Firebase Projects</title>
      <link>https://ariya.io/2020/04/practical-testing-of-firebase-projects</link>
      <pubDate>Wed, 29 Apr 2020 12:10:04 -0700</pubDate>
      
      <guid>https://ariya.io/2020/04/practical-testing-of-firebase-projects</guid>
      <description>&lt;p&gt;Is your little Firebase project getting bigger every day? Never underestimate the need to establish solid and firm integration tests from the get-go.&lt;/p&gt;

&lt;p&gt;Once you start to utilize various features of Firebase, from &lt;a href=&#34;https://firebase.google.com/docs/hosting&#34;&gt;Hosting&lt;/a&gt; and &lt;a href=&#34;https://firebase.google.com/docs/functions&#34;&gt;Functions&lt;/a&gt; to &lt;a href=&#34;https://firebase.google.com/docs/firestore/&#34;&gt;Firestore&lt;/a&gt;, it is imperative to incorporate practical local testing as soon as possible. Not only will it save you from some potential nightmares down the road, it can also facilitate faster iterations and quick(er) turn-around time during refactoring and feature implementation. Here are a few suggestions to get you started. To follow along, you can also check the git repository containing the sample code at &lt;a href=&#34;https://github.com/ariya/hello-firebase-experiment&#34;&gt;github.com/ariya/hello-firebase-experiment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2020/04/hellofirebase.png&#34; width=&#34;80%&#34; alt=&#34;Hello Firebase project in Visual Studio Code editor&#34;/&gt;&lt;/p&gt;

&lt;p&gt;The first thing that you always need to do is to implement a &lt;strong&gt;health check&lt;/strong&gt;. Its name could be as simple as &lt;code&gt;ping&lt;/code&gt;. Hence, inside your main Firebase Functions code, there should be a block that looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-js&#34;&gt;exports.ping = functions.https.onRequest((request, response) =&amp;gt; {
    response.send(&#39;OK&#39;);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now if you want to be fancy, it does not hurt to show the timestamp (&lt;a href=&#34;https://www.epochconverter.com/&#34;&gt;Unix epoch&lt;/a&gt;), which is valuable for verifying that this is not a cached or outdated HTTP response. If you wish, feel free to extend it with useful tidbits (but be careful not to reveal sensitive information).&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-js&#34;&gt;exports.ping = functions.https.onRequest((request, response) =&amp;gt; {
    response.send(`OK ${Date.now()}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In your test code (shown here with &lt;a href=&#34;https://www.npmjs.com/package/axios&#34;&gt;Axios&lt;/a&gt; to perform an HTTP request, but the concept applies to any library), do a quick sanity check that this &lt;code&gt;/ping&lt;/code&gt; is working. This is an important step towards a reliable &lt;strong&gt;local testing&lt;/strong&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-js&#34;&gt;it(&#39;should have a working ping function&#39;, async function () {
    const res = await axios.get(&#39;http://localhost:5000/ping&#39;);
    const status = res.data.substr(0, 2);
    const timestamp = res.data.substr(3);
    expect(status).toEqual(&#39;OK&#39;);
    expect(timestamp).toMatch(/[0-9]+/);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, the test might fail miserably. If that is the case, you do not have the proper setup yet to use and run &lt;a href=&#34;https://firebase.google.com/docs/rules/emulator-setup&#34;&gt;Firebase emulators&lt;/a&gt;. Using npm, make sure to install all the following packages:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;firebase-tools
firebase-functions
firebase-functions-test
firebase-admin
@google-cloud/firestore
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And check that your &lt;code&gt;firebase.json&lt;/code&gt; looks like the following:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-json&#34;&gt;{
  &amp;quot;hosting&amp;quot;: {
    &amp;quot;public&amp;quot;: &amp;quot;./public&amp;quot;,
    &amp;quot;rewrites&amp;quot;: [
      {
        &amp;quot;source&amp;quot;: &amp;quot;/ping&amp;quot;,
        &amp;quot;function&amp;quot;: &amp;quot;ping&amp;quot;
      }
    ]
  },
  &amp;quot;emulators&amp;quot;: {
    &amp;quot;functions&amp;quot;: {
      &amp;quot;port&amp;quot;: 5001
    },
    &amp;quot;firestore&amp;quot;: {
      &amp;quot;port&amp;quot;: 8080
    },
    &amp;quot;hosting&amp;quot;: {
      &amp;quot;port&amp;quot;: 5000
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note the &lt;code&gt;rewrites&lt;/code&gt; section. This makes &lt;code&gt;/ping&lt;/code&gt; handily available from the main Firebase Hosting domain, instead of the long and cryptic one such as &lt;code&gt;us-central1-YOURFIREBASEPROJECT.cloudfunctions.net/ping&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Before running tests, make sure to launch the emulators for Functions, Firestore, and Hosting:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;npm run firebase -- emulators:start --project MYPROJECT
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In the above command, &lt;code&gt;npm run firebase&lt;/code&gt; works because of the corresponding run script definition in &lt;code&gt;package.json&lt;/code&gt;. Also, substitute the name of your Firebase project accordingly. If the setup is correct, your terminal should show something like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;emulators: Starting emulators: functions, hosting
hub: emulator hub started at http://localhost:4400
functions: functions emulator started at http://localhost:5001
hosting: Serving hosting files from: ./
hosting: Local server: http://localhost:5000
hosting: hosting emulator started at http://localhost:5000
functions[ping]: http function initialized
emulators: All emulators started, it is now safe to connect.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At this point, if you point your browser to &lt;code&gt;localhost:5000/ping&lt;/code&gt;, you should get the &lt;em&gt;OK&lt;/em&gt; message (followed by the number representing the timestamp as Unix epoch). Of course, running the full tests (&lt;code&gt;npm test&lt;/code&gt;) should also yield a successful run.&lt;/p&gt;

&lt;p&gt;When setting up the tests for CI (continuous integration), it might be easier to &lt;strong&gt;let the emulators run the test automatically&lt;/strong&gt;. Here is how it is done:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;npm run firebase -- emulators:exec &amp;quot;npm test&amp;quot; --project MYPROJECT
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;exec&lt;/code&gt; option runs the subsequent command, in this case the usual &lt;code&gt;npm test&lt;/code&gt;, after starting the emulators. Once the command completes (whether successfully or not), the emulators are automatically terminated. This is &lt;a href=&#34;https://firebase.google.com/docs/emulator-suite/install_and_configure#integrate_with_your_ci_system&#34;&gt;perfect for the CI run&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Next trick up our sleeve: &lt;strong&gt;fixtures for Firestore&lt;/strong&gt;. Let us assume that your application uses this NoSQL datastore via the following simple function, shown here for illustration (and do not forget to add a new URL rewrite for &lt;code&gt;/answer&lt;/code&gt;):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-js&#34;&gt;admin.initializeApp(functions.config().firebase);
const db = admin.firestore();
exports.answer = functions.https.onRequest(async (request, response) =&amp;gt; {
    try {
        const doc = await db.collection(&#39;universe&#39;).doc(&#39;answer&#39;).get();
        const value = doc.data().value;
        console.log(`Answer is ${value}`);
        response.send(`Answer is ${value}`);
    } catch (err) {
        console.error(`Failed to obtain the answer: ${err.toString()}`);
        response.send(`EXCEPTION: ${err.toString()}`);
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And the corresponding test:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-js&#34;&gt;it(&#39;should give a proper answer&#39;, async function () {
    const res = await axios.get(&#39;http://localhost:5000/answer&#39;);
    const answer = res.data;
    expect(answer).toEqual(&#39;Answer is 42&#39;);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Launching the emulators (using the previous instructions) and running the tests, however, will result in a failure. And if you go to localhost:5000/answer, you will discover an expected response:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;EXCEPTION: TypeError: Cannot read property &#39;value&#39; of undefined
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This should not come as a surprise. When the Firebase emulators launch, the Firestore database is empty. Hence, there is no proper document yet, let alone a collection. It would be unnecessarily tedious to populate the database by hand every time (that works for this simple example, but a real-world app might have tons of collections and documents). How do we prepare a fixture for this?&lt;/p&gt;

&lt;p&gt;Well, once again the Firestore emulator comes to the rescue! While it is running, you can perform the steps to populate the database (outside the scope of this blog post; perhaps we will discuss it some other time) and then &lt;a href=&#34;https://firebase.google.com/docs/emulator-suite/install_and_configure#export_and_import_emulator_data&#34;&gt;snapshot the database&lt;/a&gt;, saving it as the test fixture:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;npm run firebase -- emulators:export spec/fixture --project MYPROJECT
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once the fixture is available, rerun the emulator (either as &lt;code&gt;start&lt;/code&gt; or through &lt;code&gt;exec&lt;/code&gt;) with the &lt;code&gt;import&lt;/code&gt; option and the Firestore database will not be empty anymore, as it is populated with the previous snapshot.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;npm run firebase -- emulators:start --import spec/fixture --project MYPROJECT
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Last but not least, let us run this test as &lt;strong&gt;an automation workflow&lt;/strong&gt; using &lt;a href=&#34;https://github.com/features/actions&#34;&gt;GitHub Actions&lt;/a&gt;. All you need is a file named &lt;code&gt;.github/workflows/test.yml&lt;/code&gt; with the following content:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js
      uses: actions/setup-node@v1
      with:
        node-version: 10.x
    - run: npm ci
    - run: npm run firebase -- emulators:exec &amp;quot;npm test&amp;quot; --import spec/fixture
      env:
        CI: true
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As it turns out, it is not too difficult to set up some practical tests of a Firebase project!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Search Box and Cloud Function</title>
      <link>https://ariya.io/2020/03/search-box-and-cloud-function</link>
      <pubDate>Tue, 31 Mar 2020 23:45:57 -0700</pubDate>
      
      <guid>https://ariya.io/2020/03/search-box-and-cloud-function</guid>
      <description>&lt;p&gt;For a blog hosted with Firebase Hosting, it turns out that a little search box is fairly easy to implement by using Cloud Functions for Firebase.&lt;/p&gt;

&lt;p&gt;Following the current trend, this blog is a static site prepared with &lt;a href=&#34;http://gohugo.io/&#34;&gt;Hugo&lt;/a&gt; and deployed to &lt;a href=&#34;https://firebase.google.com/docs/hosting/&#34;&gt;Firebase&lt;/a&gt; (see my previous blog post: &lt;a href=&#34;https://ariya.io/2017/05/static-site-with-hugo-and-firebase/&#34;&gt;Static Site with Hugo and Firebase&lt;/a&gt;). Some time ago, I realized that since I am using Firebase anyway, I might as well take advantage of its &lt;a href=&#34;https://firebase.google.com/docs/functions/&#34;&gt;Cloud Functions&lt;/a&gt; to add a little search functionality to the blog, particularly for its &lt;a href=&#34;https://firebase.google.com/docs/hosting/full-config#404&#34;&gt;404 page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2020/03/searchbox.png&#34; width=&#34;50%&#34; alt=&#34;search box&#34;/&gt;&lt;/p&gt;

&lt;p&gt;Of course, I am cheating a little bit. Using the above search box actually just redirects the search to my favorite search engine, &lt;a href=&#34;https://duckduckgo.com&#34;&gt;DuckDuckGo&lt;/a&gt;, resulting in the following:&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2020/03/duck.png&#34; width=&#34;50%&#34; alt=&#34;DuckDuckGo search&#34;/&gt;&lt;/p&gt;

&lt;p&gt;Implementing it is almost trivial. First, we need &lt;code&gt;index.js&lt;/code&gt; inside the &lt;code&gt;functions&lt;/code&gt; subdirectory with content as short as this (obviously, for your blog, replace &lt;code&gt;site&lt;/code&gt; accordingly):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-javascript&#34;&gt;const functions = require(&#39;firebase-functions&#39;);
exports.search = functions.https.onRequest((request, response) =&amp;gt; {
  const q = request.query.q || &#39;&#39;;
  response.redirect(`https://duckduckgo.com/?q=site:ariya.io+${encodeURIComponent(q)}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once it is properly deployed, the trigger URL will be in the form of &lt;code&gt;us-central1-YOURFIREBASEPROJECT.cloudfunctions.net/search&lt;/code&gt;. This is rather ugly. To overcome that, set up a &lt;a href=&#34;https://firebase.google.com/docs/hosting/full-config#rewrites&#34;&gt;rewrite&lt;/a&gt; inside &lt;code&gt;firebase.json&lt;/code&gt; so that it looks something like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
    &amp;quot;hosting&amp;quot;: {
      &amp;quot;rewrites&amp;quot;: [{
          &amp;quot;source&amp;quot; : &amp;quot;/search&amp;quot;,
          &amp;quot;function&amp;quot;: &amp;quot;search&amp;quot;
        }
      ]
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;and thus, the function is available as the top-level &lt;code&gt;/search&lt;/code&gt; of your Firebase Hosting URL, even if it is a custom domain.&lt;/p&gt;

&lt;p&gt;After this, inserting the search box is equally easy:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-html&#34;&gt;&amp;lt;form action=&amp;quot;/search&amp;quot;&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;input type=&amp;quot;text&amp;quot; name=&amp;quot;q&amp;quot; required&amp;gt; &amp;lt;button type=&amp;quot;submit&amp;quot;&amp;gt;Search&amp;lt;/button&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/form&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When a visitor uses the search, they will get redirected to DuckDuckGo and be presented with the search result. Fast and easy!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Automatic Merge of Pull Requests</title>
      <link>https://ariya.io/2020/02/automatic-merge-of-pull-requests</link>
      <pubDate>Sat, 29 Feb 2020 23:01:57 -0800</pubDate>
      
      <guid>https://ariya.io/2020/02/automatic-merge-of-pull-requests</guid>
      <description>&lt;p&gt;After using Azure DevOps for a while, I am totally sold on its Auto Complete feature for pull requests. While it does not apply universally, I do believe that any development process should be at the level where merging pull requests, or generalizing it, integrating all forms of contribution, should be as automatic and as hassle-free as possible.&lt;/p&gt;

&lt;p&gt;If you are not familiar yet with &lt;a href=&#34;https://azure.microsoft.com/en-us/services/devops&#34;&gt;Azure DevOps&lt;/a&gt;, it is basically a pay-as-you-go service for code repositories, automatic build runs, task tracking, artifact management, etc. Azure DevOps is comparable to various other services, such as GitHub, GitLab, Bitbucket, and many others. Note that although it bears the name Azure, you do &lt;em&gt;not&lt;/em&gt; need to use any other Azure services to be able to take advantage of the Azure DevOps offering (similar to how you can use Google Maps without the need to store your files at Google Drive or host your email with Gmail).&lt;/p&gt;

&lt;p&gt;One feature that makes Azure DevOps (at the time of this writing) unique compared to others is its ability to mark a PR (pull request) as &lt;em&gt;Auto Complete&lt;/em&gt;. To do this, go to the sidebar and choose &lt;em&gt;Branches&lt;/em&gt; (under the &lt;em&gt;Repos&lt;/em&gt; menu group). Once the branch list is displayed, hover on e.g. &lt;em&gt;master&lt;/em&gt;, pick its context menu (the rightmost three-dot menu), and choose &lt;em&gt;Branch policies&lt;/em&gt;. Pick the settings which suit your needs. Make sure to customize the &lt;em&gt;Build validation&lt;/em&gt;; this is done by adding a simple build policy.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2020/02/autocomplete.png&#34; width=&#34;75%&#34; alt=&#34;Enable auto completion&#34;/&gt;&lt;/p&gt;

&lt;p&gt;Now, whenever you create a pull request, there is a noticeable blue button, &lt;em&gt;Set Auto complete&lt;/em&gt;, on the pull request page. Basically, what it does is merge the pull request automatically once two conditions are fulfilled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the pull request is approved (by one or more reviewers, per branch policy)&lt;/li&gt;
&lt;li&gt;the build succeeds, i.e. as configured with its continuous integration&lt;/li&gt;
&lt;/ul&gt;
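For scripting fans, the same flag is also reachable from the command line. This is only a sketch: it assumes the azure-devops extension of the Azure CLI, and the pull request ID 42 is hypothetical:

```shell
# Requires: az extension add --name azure-devops
# Mark a (hypothetical) pull request to complete automatically
az repos pr update --id 42 --auto-complete true
```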

&lt;p&gt;There are also a few tweaks possible. For instance, you have the option to squash the branch, rebase and fast-forward, etc. Even better, there is an option to automatically delete the branch once it is merged, which can really help to reduce clutter.&lt;/p&gt;

&lt;p&gt;Removing the manual step of merging an approved pull request eliminates one more thing that we, human beings, need to be involved with. Who would not enjoy a lighter cognitive load? I hope other services such as GitHub, GitLab, Bitbucket, and many more will follow suit and implement the same feature!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Clang on Windows</title>
      <link>https://ariya.io/2020/01/clang-on-windows</link>
      <pubDate>Sun, 05 Jan 2020 14:46:09 -0800</pubDate>
      
      <guid>https://ariya.io/2020/01/clang-on-windows</guid>
      <description>&lt;p&gt;Thanks to the MSYS2 project, now there is an easy way to utilize Clang to build C/C++ application on Windows. This works equally well for both 32-bit and 64-bit programs.&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;https://www.msys2.org/&#34;&gt;MSYS2&lt;/a&gt; is a fantastic (and better) reimagination of &lt;a href=&#34;https://www.cygwin.com/&#34;&gt;Cygwin&lt;/a&gt;: it takes the best parts of a typical modern Unix environment (a familiar shell, a general collection of utilities, a porting layer, a package manager, and so on) while still working on Windows. Bootstrapping into MSYS2 is easy: either install it directly (using the GUI installer) or use &lt;a href=&#34;https://chocolatey.org/&#34;&gt;Chocolatey&lt;/a&gt;: &lt;code&gt;choco install msys2&lt;/code&gt;. Once inside its shell, &lt;code&gt;pacman&lt;/code&gt; is the go-to, ever-so-powerful &lt;a href=&#34;https://github.com/msys2/msys2/wiki/Using-packages&#34;&gt;package manager&lt;/a&gt;, with thousands of packages available at your disposal.&lt;/p&gt;

&lt;p&gt;This, of course, includes the toolchain. Not only is the latest GCC there, but we also have &lt;a href=&#34;https://clang.llvm.org/&#34;&gt;Clang&lt;/a&gt;! To illustrate the concept, let us go back to the simple ANSI C/C90 program covered in the &lt;a href=&#34;https://ariya.io/2019/07/continuous-integration-of-vanilla-c-programs-for-intel-arm-and-mips-architecture&#34;&gt;previous blog post&lt;/a&gt;. Once we clone the repository, open the MSYS2 32-bit shell and try the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;pacman -S msys/make mingw32/mingw-w64-i686-clang
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This simple step installs both Make and Clang. Wait a bit, and after that, do the usual magic:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CC=clang make
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A caveat here: Clang for Windows does not append the &lt;code&gt;.exe&lt;/code&gt; suffix to the executable. Thus, a quick rename to the rescue:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ren hello hello.exe
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And now you can run, inspect, and analyze the executable as usual.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2020/01/clang-msys2.png&#34; alt=&#34;Pipelines Clang on Windows&#34; /&gt;&lt;/p&gt;

&lt;p&gt;To incorporate it into the continuous integration using Azure Pipelines (again, see the &lt;a href=&#34;https://ariya.io/2019/07/continuous-integration-of-vanilla-c-programs-for-intel-arm-and-mips-architecture&#34;&gt;previous blog post&lt;/a&gt;), we shall construct a new job. The basic step is as follows.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- job: &#39;i686_windows_clang&#39;
  pool:
    vmImage: &#39;vs2017-win2016&#39;
  variables:
    PACMAN_PACKAGES: C:\tools\msys64\var\cache\pacman\pkg
    CC: clang
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;First, programmatically install MSYS2:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;  - script: choco install --no-progress msys2
    displayName: &#39;Install MSYS2&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After that, perform some pacman maintenance:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;  - script: |
      pacman -Sy
      pacman --noconfirm -S pacman-mirrors
    workingDirectory:  C:\tools\msys64\usr\bin\
    displayName: &#39;Check pacman&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And then, we install the required packages. At the time of this writing, Clang &lt;a href=&#34;http://releases.llvm.org/9.0.0/tools/clang/docs/&#34;&gt;version 9.0&lt;/a&gt; (the latest) will be installed.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;  - script:  pacman --noconfirm -S msys/make mingw64/mingw-w64-x86_64-clang
    workingDirectory: C:\tools\msys64\usr\bin\
    displayName: &#39;Install requirements&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For the x86 architecture (aka, 32-bit Intel/AMD), install a different package:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;  - script:  pacman --noconfirm -S msys/make mingw32/mingw-w64-i686-clang
    workingDirectory: C:\tools\msys64\usr\bin\
    displayName: &#39;Install requirements&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And now, down to the actual build step:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;  - script: |
      set PATH=C:\tools\msys64\usr\bin;C:\tools\msys64\mingw64\bin;%PATH%
      make
      ren hello hello.exe
    displayName: &#39;make&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As a minor tweak, we can also cache the pacman-downloaded packages. In the above example, it hardly matters since we only install Make and Clang. But if you have a larger application, e.g. requiring Python, Qt, and so on, it is wise to avoid the CI run redownloading the same packages again and again (saving bandwidth, and also being nice to those mirrors). We can achieve this by using the &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/caching&#34;&gt;Cache task&lt;/a&gt; from Azure Pipelines. Simply insert this after the MSYS2 installation step.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;  - task: Cache@2
    inputs:
      key: pacman
      restoreKeys: pacman
      path: $(PACMAN_PACKAGES)
    displayName: &#39;Cache pacman packages&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For the complete illustration of such a job, take a look at the actual &lt;a href=&#34;https://github.com/ariya/hello-c90/blob/master/azure-pipelines.yml&#34;&gt;azure-pipelines.yml&lt;/a&gt; for the &lt;a href=&#34;https://github.com/ariya/hello-c90&#34;&gt;hello-c90&lt;/a&gt; project.&lt;/p&gt;

&lt;p&gt;Clang everywhere, yay!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Continuous Integration of Vanilla C Programs for Intel, ARM, and MIPS Architecture</title>
      <link>https://ariya.io/2019/07/continuous-integration-of-vanilla-c-programs-for-intel-arm-and-mips-architecture</link>
      <pubDate>Mon, 22 Jul 2019 15:33:34 -0700</pubDate>
      
      <guid>https://ariya.io/2019/07/continuous-integration-of-vanilla-c-programs-for-intel-arm-and-mips-architecture</guid>
      <description>&lt;p&gt;Developing cross-platform applications presents a major challenge:, how to ensure that every commit does not break some combinations of operating systems and CPU architectures. Fortunately, thanks an array of online services and open-source tools, this challenge becomes easier to tackle.&lt;/p&gt;

&lt;p&gt;For this demo, I have the traditional &lt;em&gt;Hello, world&lt;/em&gt; program written in ANSI C/C90 at this repository: &lt;a href=&#34;https://github.com/ariya/hello-c90&#34;&gt;github.com/ariya/hello-c90&lt;/a&gt; (feel free to take a look). The objective is to verify its automatic build (for the purpose of continuous integration) across a number of different CPU architectures, operating systems, and C/C++ compilers. The supported CPU architectures (using &lt;a href=&#34;https://wiki.debian.org/SupportedArchitectures&#34;&gt;Debian nomenclature&lt;/a&gt;) are amd64, i386, i686, armhf, arm64, and mips. Among the C/C++ compilers to be tested are &lt;a href=&#34;https://gcc.gnu.org/&#34;&gt;GCC&lt;/a&gt;, &lt;a href=&#34;https://clang.llvm.org/&#34;&gt;Clang&lt;/a&gt;, &lt;a href=&#34;https://bellard.org/tcc/&#34;&gt;TCC&lt;/a&gt;, &lt;a href=&#34;https://docs.microsoft.com/en-us/cpp&#34;&gt;Visual C/C++&lt;/a&gt; (as part of Visual Studio 2017 and also 2019), &lt;a href=&#34;http://www.smorgasbordet.com/pellesc/&#34;&gt;Pelles C&lt;/a&gt;, &lt;a href=&#34;https://digitalmars.com/&#34;&gt;Digital Mars&lt;/a&gt;, as well as &lt;a href=&#34;http://mingw.org/&#34;&gt;MinGW&lt;/a&gt;. Obviously, some combinations are not available. For instance, there is no such thing (at least, not yet) as Visual C/C++ for Linux or MinGW targeting macOS.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://ariya.io/images/2019/07/ci.png&#34; alt=&#34;Build jobs&#34; /&gt;&lt;/p&gt;

&lt;p&gt;In this particular blog post, we will use &lt;a href=&#34;https://azure.microsoft.com/en-us/services/devops/pipelines/&#34;&gt;Azure Pipelines&lt;/a&gt;, a hosted build system supporting all three major OS: Windows, macOS, and Linux. For the DIY among you, the same setup can be achieved by using something like &lt;a href=&#34;https://jenkins.io/&#34;&gt;Jenkins&lt;/a&gt;, &lt;a href=&#34;https://docs.gitlab.com/ce/ci/&#34;&gt;GitLab CI&lt;/a&gt;, &lt;a href=&#34;https://www.jetbrains.com/teamcity/&#34;&gt;TeamCity&lt;/a&gt;, and many other alternatives, along with some build agents for the corresponding OS you want to tackle.&lt;/p&gt;

&lt;p&gt;The build itself is configured via the YAML file, &lt;code&gt;azure-pipelines.yml&lt;/code&gt;. There is a job for each unique combination of (Architecture, Operating System, Compiler). For example, &lt;code&gt;amd64_linux_gcc&lt;/code&gt; denotes the build job for binary for Linux on Intel/AMD 64-bit architecture, compiled using GCC. As for now, the total number of those jobs is 16.&lt;/p&gt;

&lt;p&gt;The most obvious build job is something like this. It runs natively on the hosted agent of Azure Pipelines. We just need to make sure that the right compiler (GCC in this case) is installed. For Linux and macOS, this can be done via the package manager, &lt;a href=&#34;https://wiki.debian.org/Apt&#34;&gt;apt&lt;/a&gt; and &lt;a href=&#34;https://brew.sh/&#34;&gt;Homebrew&lt;/a&gt;, respectively.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-Makefile&#34;&gt;- job: &#39;amd64_linux_gcc&#39;
  pool:
    vmImage: &#39;ubuntu-16.04&#39;
  steps:
  - script: sudo apt install -y make gcc
    displayName: &#39;Install requirements&#39;
  - script: gcc --version
    displayName: &#39;Verify tools version&#39;
  - script: CC=gcc make
    displayName: &#39;make&#39;
  - script: file ./hello
    displayName: &#39;Verify executable&#39;
  - script: ./hello
    displayName: &#39;Run&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On Windows, there is no need to do that since the hosted Windows agent is already equipped with Visual Studio. However, because the build is carried out with a Makefile (more specifically, &lt;code&gt;Makefile.win&lt;/code&gt;), we need GNU Make, which is installed via &lt;a href=&#34;https://chocolatey.org/&#34;&gt;Chocolatey&lt;/a&gt;. Note that a stage in the build job verifies the executable (useful to know whether it is built correctly or not) using &lt;code&gt;file&lt;/code&gt; (Linux and macOS) or &lt;code&gt;dumpbin&lt;/code&gt; (Windows).&lt;/p&gt;
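For completeness, such a Windows job might be sketched along these lines. The job name is illustrative, and the dumpbin step assumes the Visual Studio tools are on the PATH; the actual pipeline in the repository may differ:

```yaml
# A sketch of a native Windows build job; names here are illustrative.
- job: 'amd64_windows_vs2017'
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - script: choco install make
    displayName: 'Install requirements'
  - script: make -f Makefile.win
    displayName: 'make'
  - script: dumpbin /headers hello.exe
    displayName: 'Verify executable'
  - script: hello.exe
    displayName: 'Run'
```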

&lt;p&gt;For two special Windows compilers, &lt;a href=&#34;https://digitalmars.com/&#34;&gt;Digital Mars&lt;/a&gt; and &lt;a href=&#34;http://www.smorgasbordet.com/pellesc/&#34;&gt;Pelles C&lt;/a&gt; (a Windows development environment based on LCC), they need to be installed on the fly since they are not available on the Windows hosted agents. Digital Mars is installed with a little dance with &lt;code&gt;curl&lt;/code&gt; and &lt;code&gt;unzip&lt;/code&gt;. Meanwhile, Pelles C is readily available from Chocolatey.&lt;/p&gt;

&lt;p&gt;To target non-Intel CPU architectures, we need to use some cross compilers. Since the hosted Linux agent of Azure Pipelines supports Docker, the easiest way to achieve this is Docker-based cross compilation using &lt;a href=&#34;https://github.com/dockcross/dockcross&#34;&gt;dockcross&lt;/a&gt;. This is explained in depth in my previous blog post, &lt;a href=&#34;https://ariya.io/2019/06/cross-compiling-with-docker-on-wsl-2&#34;&gt;Cross Compiling with Docker&lt;/a&gt;. One such example is the following build job, which builds for Linux running on 32-bit ARM. Note that since the resulting executable is an ARM binary, we ought to use &lt;a href=&#34;https://www.qemu.org/&#34;&gt;QEMU&lt;/a&gt; to run it.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-Makefile&#34;&gt;- job: &#39;armhf_linux_gcc&#39;
  pool:
    vmImage: &#39;ubuntu-16.04&#39;
  steps:
  - script: sudo apt install -y qemu-user
    displayName: &#39;Install requirements&#39;
  - script: |
      git clone --depth 1 https://github.com/dockcross/dockcross.git
      cd dockcross
      docker run --rm dockcross/linux-armv7 &amp;gt; ./dockcross-linux-armv7
      chmod +x ./dockcross-linux-armv7
    displayName: &#39;Prepare Dockcross&#39;
  - script: ./dockcross/dockcross-linux-armv7 bash -c &#39;$CC --version&#39;
    displayName: &#39;Verify tools version&#39;
  - script: ./dockcross/dockcross-linux-armv7 make LDFLAGS=-static
    displayName: &#39;make&#39;
  - script: file ./hello
    displayName: &#39;Verify executable&#39;
  - script: qemu-arm ./hello
    displayName: &#39;Run&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The same approach using Docker and QEMU works well for other CPU architectures such as MIPS, ARM 64-bit, and in fact Intel x86. The last one is quite necessary since the hosted agent of Azure Pipelines runs in 64-bit mode. Thus, we use this virtualization layer (QEMU) to verify the correct execution of the 32-bit binary.&lt;/p&gt;

&lt;p&gt;As an illustration, here are two examples for MinGW. In the first, MinGW is installed directly on the Windows agent; this is self-explanatory.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-Makefile&#34;&gt;- job: &#39;amd64_windows_mingw&#39;
  pool:
    vmImage: &#39;vs2017-win2016&#39;
  variables:
    CC: &#39;gcc&#39;
  steps:
  - script: choco install mingw --version 8.1.0
    displayName: &#39;install MinGW-w64&#39;
  - script: gcc --version
    displayName: &#39;Verify tools version&#39;
  - script: make
    displayName: &#39;make&#39;
  - script: file hello.exe
    displayName: &#39;Verify executable&#39;
  - script: hello.exe
    displayName: &#39;Run&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For the second example, MinGW is used in a cross-compilation fashion. Again, we use the Docker-based dockcross to achieve this. The compiler (GCC) runs inside the Docker container on the hosted Linux agent, yet it produces a Windows executable. How do we run the resulting executable? QEMU is not suitable here (since we would still need to install or run Windows; remember, the host is Linux). But we have &lt;a href=&#34;https://www.winehq.org/&#34;&gt;WINE&lt;/a&gt; to the rescue!&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-Makefile&#34;&gt;- job: &#39;i386_windows_mingw_static&#39;
  pool:
    vmImage: &#39;ubuntu-16.04&#39;
  steps:
  - script: |
      git clone --depth 1 https://github.com/dockcross/dockcross.git
      cd dockcross
      docker run --rm dockcross/windows-static-x86 &amp;gt; ./dockcross-windows-static-x86
      chmod +x ./dockcross-windows-static-x86
    displayName: &#39;Prepare Dockcross&#39;
  - script: ./dockcross/dockcross-windows-static-x86 bash -c &#39;$CC --version&#39;
    displayName: &#39;Verify tools version&#39;
  - script: ./dockcross/dockcross-windows-static-x86 make
    displayName: &#39;make&#39;
  - script: file ./hello
    displayName: &#39;Verify executable&#39;
  - script: docker run -v $PWD:/app tianon/wine:32 bash -c &amp;quot;wine /app/hello&amp;quot;
    displayName: &#39;Run&#39;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In fact, to avoid the hassle of on-the-fly installation/configuration of WINE, we just use the Dockerized WINE.&lt;/p&gt;

&lt;p&gt;The whole ordeal of running 16 jobs takes anywhere from 5 to 20 minutes. Obviously, if you are constrained by the free tier of Azure Pipelines, you can purchase access to more hosted agents or attach your own build agents, which will definitely parallelize and speed things up.&lt;/p&gt;

&lt;p&gt;I hope that the idea outlined in this post will inspire you to continue working on more cross-platform apps. Of course, it does not have to be an application written in ANSI C. The concept can be applied to D, Go, Rust, and many other modern languages.&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>