<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Matt Rickard]]></title><description><![CDATA[Matt Rickard]]></description><link>https://matt-rickard.com/</link><image><url>https://matt-rickard.com/favicon-32x32.png</url><title>Matt Rickard</title><link>https://matt-rickard.com/</link></image><generator>mrick</generator><lastBuildDate>Thu, 12 Sep 2024 04:52:06 GMT</lastBuildDate><atom:link href="https://matt-rickard.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Pseudonyms in American History]]></title><description><![CDATA[Debates around the ratification of the Constitution and the early formation of the United States happened through pseudonymous authors. They often used names bo]]></description><link>https://matt-rickard.com/pseudonyms-in-american-history</link><guid isPermaLink="false">e8fc07e26497258abf796fd1dfd886c3</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Tue, 05 Dec 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>Debates around the ratification of the Constitution and the early formation of the United States happened through pseudonymous authors. They often used names borrowed from Greek or Roman History.</p>
<p>Why?</p>
<ul>
<li>Plausibly some protection against retaliation. However, most pseudonymous writing was quickly attributed to its authors.</li>
<li>Power in names. The names weren’t chosen at random; they often called back to famous Romans who took part in the founding of the Roman Republic, or to other figures known for their virtue or principles.</li>
</ul>
<p>Alexander Hamilton might have written under the most pseudonyms (at least five). Benjamin Franklin used at least three. Here’s a list of some of the more popular ones around the time of the American Revolution.</p>
<p><strong>Phocion</strong> (Alexander Hamilton) — Essays defending the Jay Treaty with Great Britain. Phocion was an Athenian statesman known for his integrity and opposition to demagoguery.</p>
<p><strong>Columbus</strong> (Alexander Hamilton) — Defending the Continental Congress and criticizing British policies.</p>
<p><strong>Publius</strong> (Alexander Hamilton, James Madison, John Jay) — The authors of the Federalist Papers, a series of essays advocating for the ratification of the Constitution. Individual authorship wasn’t revealed until after Hamilton’s death, and even now historians are still trying to match authors to essays. It’s hypothesized that Hamilton wrote 51 essays, Madison 29, and Jay 5. Publius Valerius Poplicola was a Roman consul known for his role in founding the Roman Republic.</p>
<p><strong>Historicus</strong> (Alexander Hamilton) — Essays on various topics related to the Constitution and federalism.</p>
<p><strong>Pacificus</strong> (Alexander Hamilton) — Used to defend President George Washington's Neutrality Proclamation of 1793 (declared the U.S. neutral in the conflict between France and Great Britain). “Making peace” in Latin.</p>
<p><strong>Helvidius</strong> (James Madison) — Written in response to Pacificus (Hamilton), these essays defended the constitutional authority of Congress in foreign affairs. Helvidius Priscus was a Roman senator known for his defense of republicanism and freedom of speech.</p>
<p><strong>Americanus</strong> (John Jay, John Stevens, Jr.) — Essays supporting the Federalist cause and the ratification of the U.S. Constitution.</p>
<p><strong>Candidus</strong> (Benjamin Franklin) — Writings advocating for various causes, including opposition to oppressive British policies.</p>
<p><a href="/silence-dogood-and-the-ben-franklin-effect"><strong>Silence Dogood</strong></a> (Benjamin Franklin) — A fictitious widow created by Franklin to offer social commentary.</p>
<p><strong>Richard Saunders “Poor Richard”</strong> (Benjamin Franklin) — Used to publish <em>Poor Richard’s Almanack</em>. The name comes from a popular London almanac, <em>Rider’s British Merlin</em>.</p>
<p><strong>“Common Sense” —</strong> Thomas Paine’s pamphlet advocating for American independence was initially published anonymously.</p>
<p><a href="/cincinnatus"><strong>Cincinnatus</strong></a> <strong>(Arthur Lee) —</strong> Anti-federalist papers.</p>
<p><strong>A Farmer</strong> <strong>(John Dickinson)</strong> — Essays titled "Letters from a Farmer in Pennsylvania," which argued against the Townshend Acts imposed by the British.</p>
<p><strong>Cato</strong> (George Clinton) — Anti-federalist essays around the time of the ratification of the Constitution. Attributed to George Clinton, but not confirmed. Cato the Younger was a Roman statesman known for his staunch republicanism and opposition to Julius Caesar.</p>
<p><strong>Brutus</strong> (Robert Yates) — An ally of George Clinton’s who wrote more anti-federalist essays. Marcus Junius Brutus was a Roman senator famous for his role in the assassination of Julius Caesar, symbolizing resistance to tyranny.</p>
<p><strong>Centinel</strong> (Samuel Bryan) — A series of anti-federalist essays critical of the proposed U.S. Constitution's centralizing tendencies.</p>
<p><strong>Poplicola</strong> (John Adams) — Essays defending the British constitution and criticizing the Stamp Act. The same Publius Valerius Poplicola that Hamilton later invoked as Publius.</p>
<p><strong>Novanglus</strong> (John Adams) — A series of essays written in response to Massachusettensis, defending colonial rights. Latinization of “New Englander”.</p>
<p><strong>A Citizen of New York</strong> (Martin Van Buren) — Political essays.</p>]]></content:encoded></item><item><title><![CDATA[Fairchildren]]></title><description><![CDATA[In 1956, William Shockley, Stanford professor and winner of the Nobel Prize in Physics for his work on semiconductors, recruited a team of young Ph.D. graduates]]></description><link>https://matt-rickard.com/fairchildren</link><guid isPermaLink="false">3e18ce4e909b25c798151e08c5c45f77</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Mon, 04 Dec 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>In 1956, William Shockley, Stanford professor and winner of the Nobel Prize in Physics for his work on semiconductors, recruited a team of young Ph.D. graduates to start a new company. The company would be called Shockley Semiconductor.</p>
<p>But Shockley was a terrible manager, and the recruits left the next year to form their own company, Fairchild Semiconductor. They would later be known as the “traitorous eight”.</p>
<p>The founders of Fairchild Semiconductor were: Gordon Moore, C. Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, and Jay Last.</p>
<p>Fairchild Semiconductor became the proto-company of Silicon Valley. Many major technology companies can trace their founding or lineage back to Fairchild.</p>
<p><strong>Intel</strong> - Founded by Robert Noyce and Gordon Moore, both former employees of Fairchild Semiconductor.</p>
<p><strong>AMD (Advanced Micro Devices)</strong> - Founded by Jerry Sanders, another Fairchild alumnus.</p>
<p><strong>Kleiner Perkins</strong> - A venture capital firm co-founded by Eugene Kleiner, a former Fairchild employee.</p>
<p><strong>Sequoia Capital</strong> - Don Valentine worked at Fairchild Semiconductor for seven years before moving to National Semiconductor (another Fairchild spinoff). He then started Sequoia Capital.</p>
<p>Other companies founded by Fairchild employees: SanDisk, National Semiconductor, Altera, LSI Logic, Amelco, Applied Materials, and more.</p>]]></content:encoded></item><item><title><![CDATA[ChatGPT After One Year]]></title><description><![CDATA[ChatGPT was released on November 30th 2022. What has changed since then?

*   **Hundreds of open-source models.** Varying sized models from small to very large.]]></description><link>https://matt-rickard.com/chatgpt-after-one-year</link><guid isPermaLink="false">8758d57dd36ad484b296760e964a3138</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Sun, 03 Dec 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>ChatGPT was released on November 30th 2022. What has changed since then?</p>
<ul>
<li><strong>Hundreds of open-source models.</strong> Models of varying sizes, from small to very large. Many are chat-tuned, similar to ChatGPT.</li>
<li><strong>Distilled models from ChatGPT.</strong> Academics and competitors both used data from ChatGPT conversations to train or fine-tune their own models.</li>
<li><strong>Competition.</strong> Microsoft launched Bing Chat. Google launched Bard. Poe, Pi, Perplexity. Claude by Anthropic. Not to mention self-hosted open-source chat UIs and other wrappers. There’s no shortage of competition (although ChatGPT still is the most popular).</li>
<li><strong>RAG is hard.</strong> “Browse with Bing” and Bing Chat launched, but hallucinations are still an issue. Browsing the internet doesn’t seem to be the catch-all solution.</li>
<li><strong>Not every launch increased performance across the board.</strong> Every new iteration of ChatGPT changed the way the model behaved. Many queries got better; some got worse. Google has always had this problem as well, but applications aren’t built on top of Google.</li>
<li><strong>A consumer subscription model.</strong> ChatGPT Plus was released in February 2023. The consumer product arguably competes with the developer and enterprise products (why not just use the API?).</li>
<li><strong>Multi-modal.</strong> ChatGPT started to accept images and files in the chat. DALL-E and the vision API became integrated into the chat window. There are open-source models that are multi-modal, but so far no experience is as sleek as OpenAI’s.</li>
<li><a href="/chatgpt-plugins-dont-have-pmf"><strong>Plugins launched but never found product-market fit</strong></a><strong>.</strong> Plugins launched but didn’t become the <a href="/necessary-conditions-for-an-app-store-monopoly">App Store</a> that OpenAI hoped. Custom GPTs seem to be the next strategy for extensibility, although they won’t launch until next year.</li>
<li><strong>Code Interpreter is getting better.</strong> Agents and tool use are still hard for LLMs, but Code Interpreter is getting better and becoming more useful. Files can now be uploaded directly in the UI and chatted with.</li>
</ul>]]></content:encoded></item><item><title><![CDATA[McNamara Fallacy]]></title><description><![CDATA[The McNamara Fallacy is named after Robert McNamara, the US Secretary of Defense during the Vietnam War. The fallacy describes making decisions using only quant]]></description><link>https://matt-rickard.com/mcnamara-fallacy</link><guid isPermaLink="false">eb2102f1d65a9c3bdfa48d26e92d4186</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Sat, 02 Dec 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>The McNamara Fallacy is named after Robert McNamara, the US Secretary of Defense during the Vietnam War. The fallacy describes making decisions using only quantitative metrics and ignoring anything else.</p>
<p>The fallacy usually follows the same four steps.</p>
<ol>
<li>Measure what can easily be measured.</li>
<li>Dismiss what can’t be measured easily.</li>
<li>Presume what can’t be measured easily isn’t important.</li>
<li>Extrapolate and conclude that what can’t be measured doesn’t exist.</li>
</ol>
<p>You can find the McNamara Fallacy in all types of disciplines. The emphasis on standardized tests in education (at the expense of less quantifiable qualities and learning). Or when the success of treatments in medicine is based only on easy to measure outcomes (not quality of life, mental health, or overall well-being). Or optimizing for short-term financial metrics at the expense of brand reputation, employee satisfaction, or other intangibles.</p>]]></content:encoded></item><item><title><![CDATA[Data Quality in LLMs]]></title><description><![CDATA[Good data is the difference between Mistral’s LLMs and Llama, which share similar architectures but different datasets.

To train LLMs, you need data that is:

]]></description><link>https://matt-rickard.com/data-quality-in-llms</link><guid isPermaLink="false">8c321aca02314bd1fd41009cbd5ada31</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Fri, 01 Dec 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>Good data is the difference between Mistral’s LLMs and Llama, which share similar architectures but different datasets.</p>
<p>To train LLMs, you need data that is:</p>
<ol>
<li><strong>Large</strong> — Training sufficiently large LLMs requires trillions of tokens.</li>
<li><strong>Clean</strong> — Noisy data reduces performance.</li>
<li><strong>Diverse</strong> — Data should come from different sources and different knowledge bases.</li>
</ol>
<p><em>What does clean data look like?</em></p>
<p>You can de-duplicate data with simple heuristics. The most basic would be removing any exact duplicates at the document, paragraph, or line level. More advanced approaches look at the data semantically, figuring out what should be omitted because it’s better represented by higher-quality sources.</p>
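<p>A rough sketch of what the basic heuristic could look like in practice — exact de-duplication at the document and line level via hashes. The function names and the normalization step are illustrative assumptions, not any particular pipeline:</p>
<pre><code>import hashlib

def normalized_hash(text):
    """Hash a unit of text (document, paragraph, or line) after light normalization."""
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

def dedup_lines(doc, seen_lines):
    """Drop lines whose normalized hash has already been seen anywhere in the corpus."""
    kept = []
    for line in doc.splitlines():
        h = normalized_hash(line)
        if line.strip() and h in seen_lines:
            continue  # exact duplicate line (e.g., repeated boilerplate)
        seen_lines.add(h)
        kept.append(line)
    return "\n".join(kept)

def dedup_corpus(docs):
    """Exact de-duplication at the document level first, then the line level."""
    seen_docs, seen_lines, out = set(), set(), []
    for doc in docs:
        h = normalized_hash(doc)
        if h in seen_docs:
            continue  # exact duplicate document
        seen_docs.add(h)
        out.append(dedup_lines(doc, seen_lines))
    return out

corpus = [
    "Subscribe to our newsletter!\nSome unique article text.",
    "Subscribe to our newsletter!\nA different unique article.",
]
print(dedup_corpus(corpus))  # the boilerplate line survives only once across the corpus
</code></pre>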
<p>The other dimension of clean data is converting various file types to <a href="/good-enough-abstractions">something easily consumed by the LLM, usually markdown</a>. That’s why we’ve seen projects like <a href="https://github.com/facebookresearch/nougat">nougat</a> and <a href="https://github.com/clovaai/donut">donut</a> convert PDFs, books, and LaTeX to better formats for LLMs. There’s a lot of training data still stuck in PDFs and other formats that are human-readable but not easily machine-readable.</p>
<p><em>Where does diverse data come from?</em></p>
<p>The surprising result of the success of the GPTs is that web text from the Internet is probably one of the most diverse datasets out there. It contains language and knowledge that aren’t found in many other corpora. That’s why models tend to perform so much better when they’re given more data from the web.</p>]]></content:encoded></item><item><title><![CDATA[Discord and AI GTM]]></title><description><![CDATA[Midjourney is the largest Discord server, with 16.5 million total users. It accounts for 13% of total Discord invites. Midjourney launched in March 2022 and doe]]></description><link>https://matt-rickard.com/discord-and-ai-gtm</link><guid isPermaLink="false">b5f92453f96d30f537c4cd759f63eb90</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Thu, 30 Nov 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>Midjourney is the largest Discord server, with 16.5 million total users. It accounts for 13% of total Discord invites. Midjourney launched in March 2022 and doesn’t have a web application. Many other AI apps (Leonardo, Pika, Suno, and AI Hub) are on Discord (or even Discord-only).</p>
<p>Why is Discord such a good GTM for AI applications?</p>
<ul>
<li><strong>Text interface.</strong> Most users are just generating images, videos, and audio in these Discord servers. Prompts are easily expressible in simple text commands. It’s why we’ve seen image generation strategies like Midjourney (all-in-one) flourish in Discord while more raw diffusion models haven’t grown as quickly (e.g., Stable Diffusion with many configurable parameters).</li>
<li><strong>Virality.</strong> Prompt engineering these models is difficult and more art than science (today). Users can see generations by other users and collectively learn what’s working and what isn’t. This means that these communities often have the most advanced prompts and the best images.</li>
<li><strong>Low friction.</strong> Go to where your users already are. Most developers have Discord now. One fewer application to sign up for.</li>
<li><strong>Free hosting.</strong> Discord pays for the image hosting and bandwidth. At Midjourney scale, this is not negligible.</li>
</ul>
<p>But Discord has its risks as a platform to build on.</p>
<ul>
<li><strong>Platform risk.</strong> Discord could (easily?) build its own Midjourney-type application into the platform. Using all of the prompt-image pairs (along with reactions as an RLHF signal), it could probably distill a much better model from Midjourney’s outputs (questionably legal but technically easy). This reminds me of the Zynga / Facebook relationship. <a href="/growth-hacking-platforms">Zynga accounted for 19% of Facebook’s revenue at one point.</a> Facebook reduced Zynga’s API access and launched its own gaming platform.</li>
<li><strong>Multi-modal.</strong> How does multi-modal fit into Discord’s text-first interface? Sure, there are images and audio that can be uploaded via the interface, <a href="/multi-modal-ai-is-a-ux-problem">but it’s hard to imagine the UI that a multi-modal AI will need in the future.</a></li>
</ul>]]></content:encoded></item><item><title><![CDATA[Standard Causes of Human Misjudgment (Munger)]]></title><description><![CDATA[In 1995, Charlie Munger gave a speech at Harvard on [_The Psychology of Human Misjudgment_](https://www.youtube.com/watch?v=Jv7sLrON7QY)_._ It was filled with t]]></description><link>https://matt-rickard.com/standard-causes-of-human-misjudgment-munger</link><guid isPermaLink="false">93482a259760ff5774be879cc5300d32</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Wed, 29 Nov 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>In 1995, Charlie Munger gave a speech at Harvard on <a href="https://www.youtube.com/watch?v=Jv7sLrON7QY"><em>The Psychology of Human Misjudgment</em></a><em>.</em> It was filled with the research he had done later in life on human psychology, matched with real-life examples that he had observed in his work. The result was a succinct list of the top cognitive biases grounded in real-life experiences. I’ve summarized the biases here, but it’s worth giving the entire speech a listen to hear the stories behind each. I’ve tried to keep Charlie’s language and numbering when possible.</p>
<ol>
<li><strong>Underestimation of Incentives:</strong> Despite understanding the significant influence of incentives (reinforcement in psychology and incentives in economics), there's a tendency to consistently underestimate their power.</li>
<li><strong>Psychological Denial:</strong> This is the refusal to accept reality because it is too painful or difficult to bear.</li>
<li><strong>Incentive-Cause Bias:</strong> This occurs when personal incentives or those of a trusted advisor create a conflict of interest, leading to biased decisions.</li>
<li><strong>Bias from Consistency and Commitment:</strong> This involves a strong tendency to stick to pre-existing beliefs or commitments, even in the face of contradictory evidence.</li>
<li><strong>Bias from Pavlovian Association:</strong> This bias refers to the error of basing decisions on past associations or correlations without considering their current relevance or accuracy.</li>
<li><strong>Bias from Reciprocation Tendency:</strong> This bias involves a natural inclination to reciprocate actions and behaviors, including conforming to others' expectations, especially when one is experiencing success or is 'on a roll.'</li>
<li><strong>Bias from Over-Influence by Social Proof:</strong> This bias refers to the heavy reliance on the actions or decisions of others, especially in situations of uncertainty or stress.</li>
<li><strong>Bias from Favoring Elegance over Practicality in Theory:</strong> This bias involves a preference for theories or explanations that are mathematically elegant or intellectually satisfying, even if they are less accurate in practical terms. “Better to be roughly right than precisely wrong” — Keynes.</li>
<li><strong>Bias from Contrast-Induced Distortions:</strong> This bias refers to the way our perceptions, sensations, and cognition can be significantly altered by contrasts.</li>
<li><strong>Bias from Over-Influence by Authority:</strong> This bias involves the tendency to conform to instructions or opinions provided by an authority figure, even when these instructions conflict with one's own moral judgment or common sense.</li>
<li><strong>Bias from Deprival Super Reaction Syndrome:</strong> This bias is characterized by an intense reaction to losing or the threat of losing something, especially something that one perceives as almost possessed but never fully owned.</li>
<li><strong>Bias from Envy/Jealousy:</strong> This bias stems from feelings of envy or jealousy towards others.</li>
<li><strong>Bias from Chemical Dependency:</strong> This bias relates to the cognitive and behavioral changes that result from chemical dependency, such as addiction to drugs or alcohol.</li>
<li><strong>Bias from Gambling Compulsion:</strong> This bias refers to the compulsive urge to gamble, driven by the psychological principle of variable reinforcement.</li>
<li><strong>Bias from Liking Distortion:</strong> This bias involves a preference for things that are familiar or similar to oneself, including one's own ideas, kind, and identity.</li>
<li><strong>Bias from Disliking Distortion:</strong> This is the opposite of liking distortion, where there's a tendency to reject or not learn from sources that are disliked.</li>
<li><strong>Bias from the Non-Mathematical Nature of the Human Brain in Probability Assessment:</strong> This bias refers to the human brain's tendency to rely on crude heuristics and be easily misled by contrasts when dealing with probabilities, rather than using precise mathematical approaches.</li>
<li><strong>Bias from Over-Influence by Extra Vivid Evidence:</strong> This bias describes the tendency to give disproportionate weight to particularly vivid or emotionally striking information when making decisions.</li>
<li><strong>Stress-Induced Mental Changes:</strong> Small and large, temporary and permanent.</li>
<li><strong>Mental Confusion from Poorly Structured Information and Inadequate Explanations:</strong> This bias involves difficulties in understanding or decision-making due to information that is not well-organized or lacks a coherent theoretical framework.</li>
</ol>]]></content:encoded></item><item><title><![CDATA[The Unreasonable Effectiveness of Monte Carlo]]></title><description><![CDATA[Monte Carlo methods are used in almost every branch of science: to evaluate risk in finance, to generate realistic lighting and shadows in 3D graphics, to do re]]></description><link>https://matt-rickard.com/the-unreasonable-effectiveness-of-monte-carlo</link><guid isPermaLink="false">2149f91f7dabaa67cca42e77aedcf192</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Tue, 28 Nov 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>Monte Carlo methods are used in almost every branch of science: to evaluate risk in finance, to generate realistic lighting and shadows in 3D graphics, to do reinforcement learning, to forecast weather, and to solve complex game theory games.</p>
<p>There are many types of Monte Carlo methods, but they all follow a general pattern — using random sampling to model complex systems.</p>
<p><strong>A simple example:</strong> Imagine a complex shape you want to know the area of.</p>
<ol>
<li>Place the shape on a dartboard.</li>
<li>Randomly throw darts at the dartboard.</li>
<li>Count the number of darts that are inside the shape and outside.</li>
<li>The estimated area of the shape = (number of darts inside the shape / total number of darts thrown) * the area of the dartboard (see the sketch below).</li>
</ol>
<p>(This is computing a definite integral numerically with a method that doesn’t depend on the dimensions! You can even easily estimate the error given the number of samples).</p>
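<p>A minimal sketch of the dartboard estimate, where the “shape” is a unit circle and the “dartboard” is the 2x2 square around it, so the true area is pi. The names and the choice of shape are illustrative assumptions:</p>
<pre><code>import random

def estimate_area(n_darts=1_000_000):
    board_area = 4.0  # the 2x2 square dartboard
    inside = 0
    for _ in range(n_darts):
        # throw a dart at a uniformly random point on the board
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y > 1.0:
            continue  # landed outside the circle
        inside += 1
    # (darts inside the shape / total darts thrown) * area of the dartboard
    return board_area * inside / n_darts

print(estimate_area())  # ~3.14159; the error shrinks roughly like 1/sqrt(n_darts)
</code></pre>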
<p><strong>Monte Carlo Tree Search (MCTS).</strong> Or use it to play a game like Blackjack (or Chess, Go, Scrabble, and many other turn-based games) with Monte Carlo Tree Search. AlphaGo and its successors (AlphaGo Zero and AlphaZero) used versions of Monte Carlo Tree Search with reinforcement learning and deep learning.</p>
<p>The idea is fairly simple — add a policy (i.e., a strategy to follow) to the random sampling process. You might start with a simple one (e.g., play randomly, or hit until your hand reaches 18). For every move in a game, add it to a tree that describes the game. For Blackjack, that might be a series of hits or stays. When a game is won or lost, go back and update all of the nodes in the tree for that game (the “backpropagation” step).</p>
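<p>A toy sketch of that loop — a heavily simplified Blackjack played with a random policy, where each (hand, action) node accumulates win statistics that are backpropagated after every game. The rules, names, and reward scheme here are illustrative assumptions, not AlphaGo’s algorithm:</p>
<pre><code>import random
from collections import defaultdict

# stats[(hand_total, action)] = [wins, visits]
stats = defaultdict(lambda: [0, 0])

def draw():
    return random.randint(1, 10)  # toy deck: cards are uniform 1-10

def play_one_game():
    """Play one game with a random policy; return (decisions, won)."""
    hand, decisions = draw() + draw(), []
    while True:
        if hand >= 21:
            break
        action = random.choice(["hit", "stay"])
        decisions.append((hand, action))
        if action == "stay":
            break
        hand += draw()
    if hand > 21:
        return decisions, False  # bust
    dealer = draw() + draw()
    while 17 > dealer:  # dealer hits until reaching 17
        dealer += draw()
    return decisions, dealer > 21 or hand > dealer

for _ in range(200_000):
    decisions, won = play_one_game()
    for node in decisions:  # "backpropagation": update every node on this game's path
        stats[node][0] += int(won)
        stats[node][1] += 1

# Each node now approximates "given this hand and action, I won X% of the time."
for hand in (12, 16, 19):
    for action in ("hit", "stay"):
        wins, visits = stats[(hand, action)]
        print(hand, action, round(wins / max(visits, 1), 3))
</code></pre>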
<p>After many games, you have a tree of expected utility for each move — that means you can sample the next move much more effectively. The value says something like — “given this current hand and set of actions, I won X% of the time”. You can get more advanced with the reward and update function — for example, you might discount wins that take many turns and prioritize quicker wins.</p>]]></content:encoded></item><item><title><![CDATA[Razor and Blades Model]]></title><description><![CDATA[The profit margin on Keurig machines is very low and sometimes even negative. On the other hand, the K-cup coffee pods have much higher profit margins.

The bus]]></description><link>https://matt-rickard.com/razor-and-blades-model</link><guid isPermaLink="false">e21d67bef257013c1fc131c3e995ef75</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Mon, 27 Nov 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>The profit margin on Keurig machines is very low and sometimes even negative. On the other hand, the K-cup coffee pods have much higher profit margins.</p>
<p>The business model: sell one item at break-even or for free to increase the sales of the complementary good. This is the “razor and blades” model. (Despite being named after the safety razor industry, early companies like Gillette didn’t initially follow this model).</p>
<p>This model works especially well when there are <a href="/the-dynamics-of-switching-costs">switching costs or vendor lock-in</a>. If there are no switching costs, other providers can come in and compete margins away from the complementary good. When the K-cup patent expired in 2012, prices came down as competitors started producing compatible pods.</p>
<p>Or when a producer owns a monopoly on the complementary good. John D. Rockefeller and Standard Oil gave away eight million kerosene lamps. Demand for kerosene (conveniently sold by Standard Oil) skyrocketed.</p>
<p>Some other examples of the razor and blades model:</p>
<ul>
<li>Kindle e-reader / digital books.</li>
<li>Video game console / video games</li>
<li>Mobile phone / cellular data plan</li>
<li>Electric toothbrush / replacement brush heads</li>
<li>Printers / ink cartridges</li>
<li>E-cigarettes / e-cigarette pods</li>
</ul>]]></content:encoded></item><item><title><![CDATA[Drawbacks of Moving to the Edge]]></title><description><![CDATA[Edge runtimes are often lauded as a fix to all latency concerns. But sometimes, moving to the edge can increase latency.

The problem: databases are still regio]]></description><link>https://matt-rickard.com/drawbacks-of-moving-to-the-edge</link><guid isPermaLink="false">bf157c8c1b784174e81dc1f89d60696f</guid><dc:creator><![CDATA[Matt Rickard]]></dc:creator><pubDate>Sun, 26 Nov 2023 14:30:00 GMT</pubDate><content:encoded><![CDATA[<p>Edge runtimes are often lauded as a fix to all latency concerns. But sometimes, moving to the edge can increase latency.</p>
<p>The problem: databases are still regional. If you move your application logic closer to the user via edge functions in multiple regions, you most likely increase the distance between your application and your database. Since that connection is often chattier (more round trips between the application and the database than between the user and the application), overall latency usually increases.</p>
<p><strong>Could you make data multi-regional?</strong> Sort of. There’s some work being done to bring the database to the edge (<a href="/sqlite-renaissance">see distributed SQLite</a>), but now, with stateful data at the edge, you have a complicated distributed systems problem.</p>
<p><strong>Smarter caching?</strong> There’s also work being done in application frameworks on smarter caching (e.g., stale-while-revalidate) so that users get fast responses for most of the application while fresh data is fetched in the background.</p>
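<p>A minimal, illustrative stale-while-revalidate cache — a sketch of the idea, not any particular framework’s API; the class, names, and timings are assumptions. It serves the cached value immediately, and if the value is older than <code>max_age</code>, refreshes it in a background thread so a later request sees fresh data:</p>
<pre><code>import threading
import time

class SWRCache:
    def __init__(self, fetch, max_age=5.0):
        self.fetch = fetch        # function that fetches fresh data (e.g., a regional DB query)
        self.max_age = max_age    # seconds before a cached value is considered stale
        self.value = None
        self.fetched_at = 0.0
        self._lock = threading.Lock()

    def _refresh(self):
        value = self.fetch()      # the slow, far-away call
        with self._lock:
            self.value, self.fetched_at = value, time.time()

    def get(self):
        with self._lock:
            have_value = self.fetched_at > 0
            stale = time.time() - self.fetched_at > self.max_age
        if not have_value:
            self._refresh()       # first request has to wait for real data
        elif stale:
            # stale-while-revalidate: return the stale value now, refresh in the background
            threading.Thread(target=self._refresh, daemon=True).start()
        return self.value

cache = SWRCache(fetch=lambda: time.strftime("%H:%M:%S"), max_age=5.0)
print(cache.get())  # first call fetches; later calls return instantly and revalidate when stale
</code></pre>]]></content:encoded></item></channel></rss>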