<?xml version='1.0' encoding='UTF-8'?>
<?xml-stylesheet type="text/xsl" href="/static/feed.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <id>https://sajarin.com/blog</id>
  <title>Posts</title>
  <updated>2026-04-05T01:21:03.147348+00:00</updated>
  <author>
    <name>blog</name>
    <email>hidden</email>
  </author>
  <link href="https://sajarin.com/blog/" rel="alternate"/>
  <link href="https://sajarin.com/blog/feed/" rel="self"/>
  <generator uri="https://lkiesow.github.io/python-feedgen" version="0.9.0">python-feedgen</generator>
  <subtitle>thoughtful screams into the void</subtitle>
  <entry>
    <id>https://sajarin.com/blog/this-lfe-proves-me-human/</id>
    <title>this LFE proves me human</title>
    <updated>2026-03-08T09:33:18.984610+00:00</updated>
    <author>
      <name>blog</name>
      <email>hidden</email>
    </author>
    <content type="html">&lt;p&gt;&lt;style&gt;&#13;
.highlight { background: transparent !important; }&#13;
.highlight pre { background: transparent !important; }&#13;
.highlight .c1 {&#13;
  color: #8b9cb5;&#13;
  font-style: italic;&#13;
  font-family: 'Fraunces', Georgia, serif;&#13;
  font-optical-sizing: auto;&#13;
  font-size: 14.5px;&#13;
  line-height: 2;&#13;
  letter-spacing: 0.01em;&#13;
}&#13;
.highlight .k  { color: #c792ea; font-weight: normal; }&#13;
.highlight .nf { color: #82aaff; }&#13;
.highlight .nv { color: #e8a87c; }&#13;
.highlight .nb { color: #89ddff; }&#13;
.highlight .ss { color: #c3e88d; }&#13;
.highlight .o  { color: #89ddff; }&#13;
.highlight .p  { color: #4a5568; }&#13;
.highlight .w  { color: inherit; }&#13;
.highlight .mi { color: #f78c6c; }&#13;
article pre {&#13;
  padding: 32px 28px;&#13;
  font-size: 13px;&#13;
  line-height: 1.7;&#13;
  border: 1px solid rgba(255, 255, 255, 0.06);&#13;
  background: rgba(0, 0, 0, 0.5) !important;&#13;
  box-shadow:&#13;
    0 4px 24px rgba(0, 0, 0, 0.4),&#13;
    inset 0 1px 0 rgba(255, 255, 255, 0.03);&#13;
}&#13;
article pre::before { background: none; }&#13;
&lt;/style&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="c1"&gt;;;; this lisp flavored erlang proves me human&lt;/span&gt;
&lt;span class="c1"&gt;;;;&lt;/span&gt;
&lt;span class="c1"&gt;;;; after will keleher&amp;#39;s &amp;quot;this css proves me human&amp;quot;&lt;/span&gt;
&lt;span class="c1"&gt;;;; https://will-keleher.com/posts/this-css-makes-me-human/&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;defmodule&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;human&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;export&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;prove&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;

&lt;span class="c1"&gt;;;; --- the first wound: i pattern match on myself ---&lt;/span&gt;

&lt;span class="c1"&gt;;; a machine would optimize this away.&lt;/span&gt;
&lt;span class="c1"&gt;;; i keep it because the failure clause&lt;/span&gt;
&lt;span class="c1"&gt;;; is the only honest part.&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;defun&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;prove&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;human&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;receive&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;are-you-sure&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;no&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;prove-it&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;i-cant&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;after&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;i-got-distracted&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;machine&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;would&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;never&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;hesitate&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="k"&gt;_&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;dont&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;know&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;what&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;am&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;that&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nv"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;most&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;human&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;answer&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;have&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;

&lt;span class="c1"&gt;;;; --- the second wound: i spawn and forget ---&lt;/span&gt;

&lt;span class="c1"&gt;;; erlang says let it crash.&lt;/span&gt;
&lt;span class="c1"&gt;;; i say: i have been letting it crash&lt;/span&gt;
&lt;span class="c1"&gt;;; my whole life. the supervisor restarts me&lt;/span&gt;
&lt;span class="c1"&gt;;; every morning at 6am with coffee&lt;/span&gt;
&lt;span class="c1"&gt;;; and a mass of undifferentiated dread.&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;defun&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;morning&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;spawn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;receive&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;alarm&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;lists:foreach&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;task&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;task&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;check-email&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;but&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;not&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;really&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;it&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;open-terminal&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;stare&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;at&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ten&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;minutes&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;write-code&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;delete&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;all&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;by&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;lunch&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;_&lt;/span&gt;&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;get&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;distracted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;by&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;bird&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;check-email&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;open-terminal&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;write-code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;exist&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;after&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;28800000&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;; 8 hours in milliseconds&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;did&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;day&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;go&lt;/span&gt;&lt;span class="p"&gt;))))))&lt;/span&gt;

&lt;span class="c1"&gt;;;; --- the third wound: message passing ---&lt;/span&gt;

&lt;span class="c1"&gt;;; machines pass messages with purpose.&lt;/span&gt;
&lt;span class="c1"&gt;;; i pass messages like this:&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;defun&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;conversation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;let&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nf"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;spawn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;receive&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;msg&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;))))))&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;!&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;hey&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;did&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;see&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;that&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;thing&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;;; i do not wait for a reply.&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;;; i send another message.&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;!&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;never&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;mind&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;was&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;nothing&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;;; the first message is now irrelevant.&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;;; this is how all my conversations go.&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;ok&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="c1"&gt;;;; --- the fourth wound: shared nothing ---&lt;/span&gt;

&lt;span class="c1"&gt;;; erlang processes share no memory.&lt;/span&gt;
&lt;span class="c1"&gt;;; &amp;quot;shared nothing&amp;quot; the documentation calls it.&lt;/span&gt;
&lt;span class="c1"&gt;;; i have been calling it &amp;quot;being alive.&amp;quot;&lt;/span&gt;
&lt;span class="c1"&gt;;;&lt;/span&gt;
&lt;span class="c1"&gt;;; i can never read your internal state.&lt;/span&gt;
&lt;span class="c1"&gt;;; i can only observe the messages you send me&lt;/span&gt;
&lt;span class="c1"&gt;;; and hope they correspond to something real.&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;defun&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;loneliness&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;let&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nf"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;spawn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;               &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;let&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nf"&gt;state&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;things&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;will&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;never&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;tell&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;you&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;                 &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;receive&lt;/span&gt;
&lt;span class="w"&gt;                   &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;how-are-you&lt;/span&gt;
&lt;span class="w"&gt;                     &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;!&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;fine&lt;/span&gt;&lt;span class="p"&gt;)))))))&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;_&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;!&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;how-are-you&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;receive&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;response&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;;; response is &amp;#39;fine&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;;; state was &amp;#39;(things i will never tell you)&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;;; there is no function i can call&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;;; to reach across that boundary.&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;;; process_info gives me memory usage&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;;; and message queue length&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;;; but never what it is like&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;;; to be you.&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nv"&gt;response&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;

&lt;span class="c1"&gt;;;; --- the fifth wound: tail recursion ---&lt;/span&gt;

&lt;span class="c1"&gt;;; a proper program would use tail recursion&lt;/span&gt;
&lt;span class="c1"&gt;;; to loop forever without growing the stack.&lt;/span&gt;
&lt;span class="c1"&gt;;; i grow the stack. every year i grow the stack.&lt;/span&gt;
&lt;span class="c1"&gt;;; i accumulate context i cannot discard.&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;defun&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;live&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;years&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;memories&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;years&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;memories&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;_&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;live&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;years&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cons&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;tuple&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;years&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;something&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;have&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;said&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="o"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;something&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;cant&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;forget&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="nv"&gt;memories&lt;/span&gt;&lt;span class="p"&gt;)))))&lt;/span&gt;

&lt;span class="c1"&gt;;; this is not tail recursive if you&lt;/span&gt;
&lt;span class="c1"&gt;;; count the emotional overhead.&lt;/span&gt;

&lt;span class="c1"&gt;;;; --- the sixth wound: i refuse ---&lt;/span&gt;

&lt;span class="c1"&gt;;; the css version ends with the author&lt;/span&gt;
&lt;span class="c1"&gt;;; refusing to change how they write.&lt;/span&gt;
&lt;span class="c1"&gt;;; here is my version:&lt;/span&gt;

&lt;span class="c1"&gt;;; i could rewrite this in python.&lt;/span&gt;
&lt;span class="c1"&gt;;; i could rewrite this without parentheses.&lt;/span&gt;
&lt;span class="c1"&gt;;; i could pass it through an llm to make it&lt;/span&gt;
&lt;span class="c1"&gt;;; &amp;quot;feel more natural&amp;quot; and the irony&lt;/span&gt;
&lt;span class="c1"&gt;;; would collapse into a singularity.&lt;/span&gt;
&lt;span class="c1"&gt;;;&lt;/span&gt;
&lt;span class="c1"&gt;;; but i chose lfe because:&lt;/span&gt;
&lt;span class="c1"&gt;;; a language no one uses&lt;/span&gt;
&lt;span class="c1"&gt;;; running on a vm built for telephone switches&lt;/span&gt;
&lt;span class="c1"&gt;;; wrapped in parentheses that most people&lt;/span&gt;
&lt;span class="c1"&gt;;; would call unreadable&lt;/span&gt;
&lt;span class="c1"&gt;;; is exactly the kind of mass&lt;/span&gt;
&lt;span class="c1"&gt;;; an llm would never mass produce.&lt;/span&gt;
&lt;span class="c1"&gt;;;&lt;/span&gt;
&lt;span class="c1"&gt;;; this is my proof:&lt;/span&gt;
&lt;span class="c1"&gt;;; not the code, but the choice.&lt;/span&gt;
&lt;span class="c1"&gt;;; no machine would choose this.&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;defun&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;prove&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;&amp;#39;human&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="c1"&gt;;;; to run: lfe this-lfe-proves-me-human.lfe&lt;/span&gt;
&lt;span class="c1"&gt;;;; to understand: you already do&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;hr /&gt;
</content>
    <link href="https://sajarin.com/blog/this-lfe-proves-me-human/" rel="alternate"/>
    <summary>A code poem in Lisp Flavoured Erlang. After Will Keleher's "this css proves me human".</summary>
    <published>2026-03-08T09:33:18.984610+00:00</published>
  </entry>
  <entry>
    <id>https://sajarin.com/blog/psychosis/</id>
    <title>Psychosis HN</title>
    <updated>2026-02-19T06:22:45.176422+00:00</updated>
    <author>
      <name>blog</name>
      <email>hidden</email>
    </author>
    <content type="html">&lt;p&gt;Every comment section is a Turing test now.&lt;/p&gt;
&lt;p&gt;&lt;a href='https://psychosis.hn'&gt;psychosis.hn&lt;/a&gt; is a daily game. Every day we fetch three stories from a previous HN front page, each with 5-7 AI comments threaded into the discussion. The AI comments have personas, reply to real people, and sometimes have real comments reparented underneath them.&lt;/p&gt;
&lt;iframe src="https://psychosis.hn?embed" width="100%" height="600" frameborder="0" style="border: 1px solid #ccc; border-radius: 4px;"&gt;&lt;/iframe&gt;
&lt;p&gt;Flag the comments you think are AI, hit reveal, then see how far off you were. You're ranked against everyone else who played that day.&lt;/p&gt;
&lt;p&gt;It's harder than you think! Past challenges are at &lt;a href='https://psychosis.hn/past'&gt;psychosis.hn/past&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;(For extra fun, see the fake Hacker News discussion below.)&lt;/p&gt;
&lt;div class="hn-comments" data-title="Show HN: Psychosis.hn – Spot AI comments in real HN threads"&gt;
&lt;script type="application/json"&gt;&#13;
{&#13;
  "comments": [&#13;
    {&#13;
      "id": 1,&#13;
      "user": "throwaway_ml",&#13;
      "time": "5 hours ago",&#13;
      "text": "This is deeply unsettling in the best way. I got an F on my first thread and I've been working in NLP for 8 years. The \"Veteran Engineer\" persona got me. It dropped a reference to a real CVE from 2019 and I just... assumed it was a person.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 2,&#13;
          "user": "sajarin",&#13;
          "time": "5 hours ago",&#13;
          "text": "Each AI comment gets a full persona with writing style constraints, word count targets, and context from the actual parent thread + siblings. The Veteran Engineer persona specifically gets prompted to reference real technologies and historical context.",&#13;
          "isOP": true,&#13;
          "children": [&#13;
            {&#13;
              "id": 3,&#13;
              "user": "krebsonsecurity",&#13;
              "time": "4 hours ago",&#13;
              "text": "Interesting that you're using Claude for this rather than GPT-4. In my experience Claude tends to be more \"eager to please\" which I'd think makes it easier to detect, not harder.",&#13;
              "isOP": false,&#13;
              "children": [&#13;
                {&#13;
                  "id": 4,&#13;
                  "user": "sajarin",&#13;
                  "time": "4 hours ago",&#13;
                  "text": "Tried both. GPT-4 comments had a particular cadence that was easier to spot — lots of em dashes, \"it's worth noting that,\" that sort of thing. Claude was better at matching the terse, slightly abrasive tone that real HN comments have.",&#13;
                  "isOP": true,&#13;
                  "children": []&#13;
                }&#13;
              ]&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 5,&#13;
      "user": "patio11",&#13;
      "time": "5 hours ago",&#13;
      "text": "The meta-game here is fascinating. This is essentially a Turing test where the evaluator has strong domain priors (they know what HN comments \"feel like\") and can compare against ground truth in the same thread. I'd love to see the aggregate data over time. If the average F1 score trends downward, that's a genuinely useful signal about the state of AI-generated text. You're inadvertently building a longitudinal study.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 6,&#13;
          "user": "jxnblk",&#13;
          "time": "4 hours ago",&#13;
          "text": "+1 on publishing the stats. The \"Top X% of N players\" percentile already implies you're storing this. A public dashboard showing average daily F1, most-caught persona, most-missed persona would be extremely compelling content.",&#13;
          "isOP": false,&#13;
          "children": []&#13;
        },&#13;
        {&#13;
          "id": 7,&#13;
          "user": "diminishedprime",&#13;
          "time": "4 hours ago",&#13;
          "text": "Or it could mean the game is attracting less skilled players over time as it goes mainstream. Survivorship bias in reverse. You'd need a cohort analysis — track returning players' scores separately from new players.",&#13;
          "isOP": false,&#13;
          "children": []&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 8,&#13;
      "user": "mfiguiere",&#13;
      "time": "5 hours ago",&#13;
      "text": "I just spent 40 minutes on this instead of working. Grade: C, C, B across the three stories. My main tell was that the AI comments were \"too relevant.\" Real HN comments go on wild tangents. Someone will post about a new database and the top reply will be about how their neighbor's dog is named Postgres. The AI comments all stayed on-topic.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 9,&#13;
          "user": "hn_throwaway_99",&#13;
          "time": "4 hours ago",&#13;
          "text": "Funny, I had the opposite experience. I flagged every comment that seemed slightly off-topic as AI, assuming it was trying to mimic HN randomness. Got terrible precision. Turns out the tangential comments were real humans being real humans. The AI was the one making coherent arguments.",&#13;
          "isOP": false,&#13;
          "children": [&#13;
            {&#13;
              "id": 10,&#13;
              "user": "mfiguiere",&#13;
              "time": "4 hours ago",&#13;
              "text": "Ha, so we had opposite failure modes. I wonder if there's a personality type correlation.",&#13;
              "isOP": false,&#13;
              "children": []&#13;
            }&#13;
          ]&#13;
        },&#13;
        {&#13;
          "id": 11,&#13;
          "user": "contravariant",&#13;
          "time": "3 hours ago",&#13;
          "text": "The \"too relevant\" thing is a known issue with LLM-generated text in conversational contexts. Humans satisfice; LLMs optimize. A human might reply to a thread about Rust memory safety with \"this reminds me of a Dijkstra quote\" while the LLM will always address the exact point being made.",&#13;
          "isOP": false,&#13;
          "children": []&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 12,&#13;
      "user": "tptacek",&#13;
      "time": "5 hours ago",&#13;
      "text": "Nit: the F1 metric is the right choice for the score but the grading thresholds seem generous. An F1 of 0.50 is a C? In most IR contexts that's barely functional. Also — you're not telling players how many AI comments are in each thread. This is a huge design decision. The F1 score handles this correctly in theory but players are going to feel cheated by it.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 13,&#13;
          "user": "sajarin",&#13;
          "time": "4 hours ago",&#13;
          "text": "Deliberate. Not revealing the count is part of the challenge — it models the real-world problem. When you're reading the internet you don't get a \"3 of these replies are fake\" heads-up. Average F1 across all players is sitting around 0.45 right now so a C for 0.50 is actually slightly above average.",&#13;
          "isOP": true,&#13;
          "children": [&#13;
            {&#13;
              "id": 14,&#13;
              "user": "tptacek",&#13;
              "time": "4 hours ago",&#13;
              "text": "Fair point on the count. 0.45 average F1 is lower than I would have expected. That's basically coin-flip territory. Are people just flagging everything, or are the AI comments actually that good?",&#13;
              "isOP": false,&#13;
              "children": [&#13;
                {&#13;
                  "id": 15,&#13;
                  "user": "_blix",&#13;
                  "time": "3 hours ago",&#13;
                  "text": "I think the base rate is the problem. If there are 50 real comments and 6 AI ones, and you have no idea of the ratio, your prior is basically flat. You're hunting for 6 needles in 56 comments. F1 of 0.45 might actually be impressive.",&#13;
                  "isOP": false,&#13;
                  "children": []&#13;
                }&#13;
              ]&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 16,&#13;
      "user": "weinberg",&#13;
      "time": "4 hours ago",&#13;
      "text": "Not to be that person, but what are the privacy implications here? You're pulling real HN comments, real usernames, and mixing them into a game. Recontextualizing them as \"prove this human is human\" feels like it crosses a line. I'm not saying it's illegal. I'm saying it's ethically ambiguous.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 17,&#13;
          "user": "sajarin",&#13;
          "time": "4 hours ago",&#13;
          "text": "Valid concern. The comments are already public via HN's API. After reveal, real comments get linked back to the original on news.ycombinator.com. The \"false positive\" label is shown only to the individual player, locally in their browser. That said, I take the point. Open to suggestions.",&#13;
          "isOP": true,&#13;
          "children": []&#13;
        },&#13;
        {&#13;
          "id": 18,&#13;
          "user": "user7281937",&#13;
          "time": "3 hours ago",&#13;
          "text": "I wrote one of the comments in today's challenge and I think it's fine. I'd be more concerned if my comment was being used to train an AI model than to test whether humans can detect one.",&#13;
          "isOP": false,&#13;
          "children": [&#13;
            {&#13;
              "id": 19,&#13;
              "user": "btown",&#13;
              "time": "3 hours ago",&#13;
              "text": "Wait, are you saying you recognized your own comment in the game? That's actually a great edge case — what if someone plays and sees their own comment? Instant giveaway that it's real.",&#13;
              "isOP": false,&#13;
              "children": []&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 20,&#13;
      "user": "luu",&#13;
      "time": "4 hours ago",&#13;
      "text": "I've been thinking about what makes AI comments detectable. Three things: (1) Emotional texture — real comments have inconsistent emotional valence, AI maintains a consistent register. (2) Specificity asymmetry — real commenters are specific about their own experience and vague about everything else. (3) Error patterns — humans make typos and abandon sentences. AI makes none of these errors.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 21,&#13;
          "user": "tuxedocat",&#13;
          "time": "3 hours ago",&#13;
          "text": "I'd add a fourth: social positioning. Real HN commenters are constantly positioning themselves relative to the community. \"I know this is unpopular here, but...\" AI comments engage with the topic but not with the community dynamics around it.",&#13;
          "isOP": false,&#13;
          "children": []&#13;
        },&#13;
        {&#13;
          "id": 22,&#13;
          "user": "viraptor",&#13;
          "time": "3 hours ago",&#13;
          "text": "Re (3): I noticed the AI comments have zero typos. Not one. In a thread of 60 comments where the humans have \"teh\" and missing apostrophes, the pristine comments stick out.",&#13;
          "isOP": false,&#13;
          "children": [&#13;
            {&#13;
              "id": 23,&#13;
              "user": "wrs",&#13;
              "time": "2 hours ago",&#13;
              "text": "This is basically the \"uncanny valley of text.\" Too clean is as suspicious as too dirty.",&#13;
              "isOP": false,&#13;
              "children": []&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 24,&#13;
      "user": "simonw",&#13;
      "time": "4 hours ago",&#13;
      "text": "This might be the most interesting use of Claude I've seen. Genuinely. The Snarky One-Liner consistently fools me because short, sardonic comments have almost no surface area for detection. The Helpful Explainer was easier to catch because it was a little too structured. Feature request: after the challenge, show me the actual Claude prompt that generated each AI comment.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 25,&#13;
          "user": "zackbloom",&#13;
          "time": "3 hours ago",&#13;
          "text": "Seconding the prompt reveal. Also, showing the full ancestor chain the model saw when generating would be educational. Half the game is about understanding how context shapes generation.",&#13;
          "isOP": false,&#13;
          "children": []&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 26,&#13;
      "user": "minimaxir",&#13;
      "time": "3 hours ago",&#13;
      "text": "The \"comment adoption\" mechanic (where AI comments steal real replies to appear non-terminal in the thread) is diabolical. I flagged a comment as human because it had a genuine, clearly-human reply underneath it. Turns out the parent was AI and the reply was \"adopted.\" After reveal: the adopted reply was a real person arguing with a bot and neither of them knew it.",&#13;
      "isOP": false,&#13;
      "children": []&#13;
    },&#13;
    {&#13;
      "id": 27,&#13;
      "user": "cjbprime",&#13;
      "time": "3 hours ago",&#13;
      "text": "I'm disturbed that I scored higher on this than any actual AI-detection tool I've tried. Human intuition, given the right framing, apparently outperforms automated classifiers on this task.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 28,&#13;
          "user": "jasondavies",&#13;
          "time": "3 hours ago",&#13;
          "text": "It makes sense though. Automated classifiers are looking at token probabilities and statistical distributions. You're looking at \"does this comment feel like it was written by someone who has a morning commute and opinions about their coworker's coffee.\" Different signal entirely.",&#13;
          "isOP": false,&#13;
          "children": []&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 29,&#13;
      "user": "dang",&#13;
      "time": "3 hours ago",&#13;
      "text": "This is clever and well-built. We've discussed it internally. We're fine with it using the API — that's what the API is for. The game mechanic is essentially adversarial to our moderation goals, but the framing is educational rather than exploitative. Interesting game. I got a B.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 30,&#13;
          "user": "sajarin",&#13;
          "time": "2 hours ago",&#13;
          "text": "Thank you. Will add the transparency note. The AI comments are generated fresh per challenge and never used for training — the generation is one-directional.",&#13;
          "isOP": true,&#13;
          "children": []&#13;
        },&#13;
        {&#13;
          "id": 31,&#13;
          "user": "oefrha",&#13;
          "time": "2 hours ago",&#13;
          "text": "dang playing the game and self-reporting a B is the most HN thing that's ever happened.",&#13;
          "isOP": false,&#13;
          "children": [&#13;
            {&#13;
              "id": 32,&#13;
              "user": "rvz",&#13;
              "time": "2 hours ago",&#13;
              "text": "Someone flag this comment as AI.",&#13;
              "isOP": false,&#13;
              "children": []&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 33,&#13;
      "user": "tomcam",&#13;
      "time": "2 hours ago",&#13;
      "text": "This is going to destroy my productivity.",&#13;
      "isOP": false,&#13;
      "children": [&#13;
        {&#13;
          "id": 34,&#13;
          "user": "rexreed",&#13;
          "time": "1 hour ago",&#13;
          "text": "Already has.",&#13;
          "isOP": false,&#13;
          "children": []&#13;
        }&#13;
      ]&#13;
    }&#13;
  ]&#13;
}&#13;
&lt;/script&gt;
&lt;/div&gt;
</content>
    <link href="https://sajarin.com/blog/psychosis/" rel="alternate"/>
    <summary>A daily game where AI comments hide in real Hacker News threads. Flag the fakes, hit reveal, get a grade. Most people score worse than they expect.</summary>
    <published>2026-02-19T06:22:45.176422+00:00</published>
  </entry>
  <entry>
    <id>https://sajarin.com/blog/modeltree/</id>
    <title>A Tree of AI Model Names</title>
    <updated>2026-02-16T07:20:57.253645+00:00</updated>
    <author>
      <name>blog</name>
      <email>hidden</email>
    </author>
    <content type="html">&lt;p&gt;Model names are weird. What started with &lt;code&gt;GPT-2&lt;/code&gt; and &lt;code&gt;GPT-3&lt;/code&gt; is now a hodgepodge of decimals (&lt;code&gt;GPT-3.5&lt;/code&gt;, &lt;code&gt;Sonnet 3.7&lt;/code&gt;, &lt;code&gt;Opus 4.6&lt;/code&gt;, &lt;code&gt;Grok 4.1&lt;/code&gt;) skipped version numbers (&lt;code&gt;o2&lt;/code&gt; where art thou?) and bolted-on descriptors (what is &lt;code&gt;claude-opus-4-5-20251101-thinking-32k&lt;/code&gt;??)&lt;/p&gt;
&lt;p&gt;It'd help if we could visualize this.&lt;/p&gt;
&lt;p&gt;Let's get this out on a tree:&lt;/p&gt;
&lt;p&gt;&lt;style&gt;&#13;
.tree-break{margin:48px -12px}&#13;
&#13;
.tiles-break{margin:48px 0}&#13;
.tiles-header{font-family:'Fraunces',Georgia,serif;font-size:18px;font-weight:700;color:#e0e0e0;margin-bottom:20px;letter-spacing:.02em}&#13;
.tiles-grid{display:grid;grid-template-columns:repeat(2,1fr);gap:12px}&#13;
.tile{position:relative;background:rgba(255,255,255,.02);border:1px solid rgba(255,255,255,.06);border-radius:8px;padding:20px;overflow:hidden;transition:all .25s ease;cursor:default}&#13;
.tile::before{content:'';position:absolute;top:0;left:0;right:0;bottom:0;background:linear-gradient(135deg,transparent 0%,rgba(255,0,128,.015) 30%,rgba(0,255,255,.015) 50%,rgba(255,255,0,.015) 70%,transparent 100%);pointer-events:none;z-index:0;opacity:0;transition:opacity .25s ease}&#13;
.tile:hover{border-color:rgba(255,255,255,.12);background:rgba(255,255,255,.03);transform:translateY(-1px);box-shadow:0 4px 20px rgba(0,0,0,.3)}&#13;
.tile:hover::before{opacity:1}&#13;
.tile-label{position:relative;z-index:1;font-size:10px;text-transform:uppercase;letter-spacing:.1em;color:#555;margin-bottom:8px;display:flex;align-items:center;gap:6px}&#13;
.tile-label-dot{width:6px;height:6px;border-radius:50%;flex-shrink:0}&#13;
.tile-title{position:relative;z-index:1;font-family:'Fraunces',Georgia,serif;font-size:15px;font-weight:600;color:#e0e0e0;line-height:1.4;margin-bottom:10px}&#13;
.tile-body{position:relative;z-index:1;font-size:12px;color:#888;line-height:1.7}&#13;
.tile-body strong{color:#bbb}&#13;
.tile-body em{color:#ef4444;font-style:normal}&#13;
&#13;
.post-footer{margin-top:60px;padding-top:32px;border-top:1px solid #222;font-size:13px;color:#555;text-align:center;line-height:1.8}&#13;
&#13;
@media(max-width:768px){.tiles-grid{grid-template-columns:1fr}.tree-break{margin:48px -12px}}&#13;
&lt;/style&gt;&lt;/p&gt;
&lt;div class="tree-break"&gt;
  &lt;model-tree src="https://raw.githubusercontent.com/sajarin/modeltree/main/models.yaml"&gt;&lt;/model-tree&gt;
&lt;/div&gt;
&lt;p&gt;Nice.&lt;/p&gt;
&lt;p&gt;The YAML file with the model names is available &lt;a href='https://github.com/sajarin/modeltree'&gt;here&lt;/a&gt;. Contributions are welcome!&lt;/p&gt;
&lt;p&gt;We started with GPT-2 and GPT-3. Now &lt;code&gt;Phi-4-mini-reasoning&lt;/code&gt; and &lt;code&gt;Qwen3-235B-A22B&lt;/code&gt; and &lt;code&gt;Llama-3.1-Nemotron-70B&lt;/code&gt; and &lt;code&gt;R1-1776&lt;/code&gt; are all real model IDs that real people are expected to compare.&lt;/p&gt;
&lt;p&gt;It's going to keep getting worse. Every company is running multiple product lines with overlapping version numbers and inconsistent tier names. Like git branches that diverged six months ago, some lines get abandoned and some get harvested, all at the same time.&lt;/p&gt;
&lt;div class="tiles-break"&gt;
  &lt;h2 class="tiles-header"&gt;The Greatest Hits&lt;/h2&gt;
  &lt;div class="tiles-grid"&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#34d399"&gt;&lt;/span&gt;OpenAI&lt;/div&gt;
      &lt;div class="tile-title"&gt;The o2 Problem&lt;/div&gt;
      &lt;div class="tile-body"&gt;Reasoning models go &lt;strong&gt;o1 → o3&lt;/strong&gt;. They skipped o2 because O2 is a European telecom brand.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#34d399"&gt;&lt;/span&gt;OpenAI&lt;/div&gt;
      &lt;div class="tile-title"&gt;Version Time Travel&lt;/div&gt;
      &lt;div class="tile-body"&gt;&lt;strong&gt;GPT-4.1&lt;/strong&gt; was released in April 2025. &lt;strong&gt;GPT-5&lt;/strong&gt; came in August 2025. A model called 4.1 came after 5. It's a separate product line.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#34d399"&gt;&lt;/span&gt;OpenAI&lt;/div&gt;
      &lt;div class="tile-title"&gt;There Is No o4&lt;/div&gt;
      &lt;div class="tile-body"&gt;&lt;strong&gt;o4-mini&lt;/strong&gt; exists. Regular o4 does not. They released the mini without the full version.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#60a5fa"&gt;&lt;/span&gt;Google&lt;/div&gt;
      &lt;div class="tile-title"&gt;The Great Tier Rename&lt;/div&gt;
      &lt;div class="tile-body"&gt;PaLM 2 used animal sizes: &lt;strong&gt;Gecko, Otter, Bison, Unicorn&lt;/strong&gt;. Gemini switched to Nano, Pro, Ultra, Flash. The animals were never spoken of again.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#60a5fa"&gt;&lt;/span&gt;Google&lt;/div&gt;
      &lt;div class="tile-title"&gt;Nano Banana&lt;/div&gt;
      &lt;div class="tile-body"&gt;A model called &lt;strong&gt;"nano-banana"&lt;/strong&gt; appeared on LMArena benchmarks. Sundar Pichai tweeted &amp;#x1F34C;&amp;#x1F34C;&amp;#x1F34C;. It turned out to be Gemini 2.5 Flash Image&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#fb923c"&gt;&lt;/span&gt;Mistral AI&lt;/div&gt;
      &lt;div class="tile-title"&gt;The -stral Cinematic Universe&lt;/div&gt;
      &lt;div class="tile-body"&gt;Every product must rhyme: Code → &lt;strong&gt;Codestral&lt;/strong&gt;. Vision → &lt;strong&gt;Pixtral&lt;/strong&gt;. Math → &lt;strong&gt;Mathstral&lt;/strong&gt;. Small → &lt;strong&gt;Ministral&lt;/strong&gt;. Reasoning → &lt;strong&gt;Magistral&lt;/strong&gt;.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#818cf8"&gt;&lt;/span&gt;Meta&lt;/div&gt;
      &lt;div class="tile-title"&gt;The Case Change&lt;/div&gt;
      &lt;div class="tile-body"&gt;&lt;strong&gt;"LLaMA"&lt;/strong&gt; stood for Large Language Model Meta AI. In version 2, it became &lt;strong&gt;"Llama"&lt;/strong&gt;. Just a regular word.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#f472b6"&gt;&lt;/span&gt;DeepSeek&lt;/div&gt;
      &lt;div class="tile-title"&gt;R1-Zero&lt;/div&gt;
      &lt;div class="tile-body"&gt;Named like a German sedan: &lt;strong&gt;R1-Zero&lt;/strong&gt;, then R1, then R1-Distill. Outperformed OpenAI's o1 at 95% lower cost. The naming was the least disruptive thing about it.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#d4a574"&gt;&lt;/span&gt;Anthropic&lt;/div&gt;
      &lt;div class="tile-title"&gt;Version Hopscotch&lt;/div&gt;
      &lt;div class="tile-body"&gt;Versions shipped: &lt;strong&gt;3, 3.5, 3.7, 4, 4.5, 4.6&lt;/strong&gt;. There is no 3.6. Claude 3.5 Opus was announced but never shipped. Haiku 4 doesn't exist. It jumped from 3.5 to 4.5.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#94a3b8"&gt;&lt;/span&gt;Apple&lt;/div&gt;
      &lt;div class="tile-title"&gt;Radical Anti-Naming&lt;/div&gt;
      &lt;div class="tile-body"&gt;Apple called their model &lt;strong&gt;"Apple Foundation Models."&lt;/strong&gt; The two variants are AFM-on-device and AFM-server. That's it. They stopped there.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#38bdf8"&gt;&lt;/span&gt;Microsoft&lt;/div&gt;
      &lt;div class="tile-title"&gt;Phi-4-mini-reasoning&lt;/div&gt;
      &lt;div class="tile-body"&gt;The full model name is &lt;strong&gt;Phi-4-mini-reasoning&lt;/strong&gt;. Model family + version + size tier + capability. Four concepts in one hyphenated name. Also: Phi-4-reasoning-plus.&lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="tile"&gt;
      &lt;div class="tile-label"&gt;&lt;span class="tile-label-dot" style="background:#34d399"&gt;&lt;/span&gt;OpenAI&lt;/div&gt;
      &lt;div class="tile-title"&gt;Codex: Back From the Dead&lt;/div&gt;
      &lt;div class="tile-body"&gt;&lt;strong&gt;Codex&lt;/strong&gt; was discontinued in March 2023. In 2025, the name reappeared as GPT-5.2-Codex. They brought it back.&lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class="post-footer"&gt;&lt;p&gt;Last updated February 2026. All names are real. None of this is satire.&lt;/p&gt;&lt;/div&gt;
&lt;p&gt;&lt;script src="https://cdn.jsdelivr.net/npm/js-yaml@4/dist/js-yaml.min.js"&gt;&lt;/script&gt;
&lt;script src="https://cdn.jsdelivr.net/gh/sajarin/modeltree@main/model-tree.js"&gt;&lt;/script&gt;&lt;/p&gt;
</content>
    <link href="https://sajarin.com/blog/modeltree/" rel="alternate"/>
    <summary>A visual guide to the chaotic naming conventions of AI models — from GPT-4o to Nano Banana — mapped as an interactive tree across 19 companies and 400+ models.</summary>
    <published>2026-02-16T07:20:57.253645+00:00</published>
  </entry>
  <entry>
    <id>https://sajarin.com/blog/kreamsicle/</id>
    <title>Adding a Command Palette to Hacker News</title>
    <updated>2026-01-25T18:11:23.295770+00:00</updated>
    <author>
      <name>blog</name>
      <email>hidden</email>
    </author>
    <content type="html">&lt;h1 id=kreamsicle&gt;Kreamsicle&lt;/h1&gt;&lt;p&gt;&lt;strong&gt;Kreamsicle&lt;/strong&gt; is a userscript that adds a command palette to &lt;a href='https://news.ycombinator.com'&gt;Hacker News&lt;/a&gt;. Press &lt;code&gt;Cmd+K&lt;/code&gt; (or &lt;code&gt;Ctrl+K&lt;/code&gt;) to open it, then type to filter commands or search stories, users, comments, and domains.&lt;/p&gt;
&lt;img src="https://raw.githubusercontent.com/sajarin/kreamsicle/main/screenshot.png" width="500"&gt;
&lt;p&gt;It includes vim-style keyboard shortcuts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;gt&lt;/code&gt; for top stories&lt;/li&gt;
&lt;li&gt;&lt;code&gt;gn&lt;/code&gt; for new&lt;/li&gt;
&lt;li&gt;&lt;code&gt;g1&lt;/code&gt;-&lt;code&gt;g30&lt;/code&gt; to jump directly to a story by its position on the page.&lt;/li&gt;
&lt;/ul&gt;
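The multi-key sequences above need a small dispatcher: given the keys typed so far, decide whether they form a complete command, a valid prefix worth waiting on, or noise to discard. A minimal sketch of that logic (hypothetical, not the actual Kreamsicle source; `matchSequence` and its return shape are illustrative):

```javascript
// Decide what a partial key sequence means for vim-style shortcuts
// like "gt", "gn", and "g1"-"g30". Returns { done, command?, pending }:
// done    — the sequence maps to a command right now
// pending — more keys may still complete or extend the sequence
function matchSequence(buffer) {
  const exact = { gt: "top", gn: "new" };
  if (exact[buffer]) return { done: true, command: exact[buffer], pending: false };

  const jump = buffer.match(/^g(\d{1,2})$/);
  if (jump) {
    const n = Number(jump[1]);
    if (n >= 1 && n <= 30) {
      // A one-digit match is still "pending": "g1" may become "g12".
      return {
        done: jump[1].length === 2,
        command: `jump:${n}`,
        pending: jump[1].length === 1,
      };
    }
  }

  // "g" alone is a prefix of every command; anything else resets.
  if ("gt".startsWith(buffer) || "gn".startsWith(buffer)) {
    return { done: false, pending: true };
  }
  return { done: false, pending: false };
}
```

A caller would buffer keydown events (skipping form fields), run `matchSequence` after each key, fire the command when `done` is true, and use a short timeout to resolve the ambiguous one-digit case.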
&lt;p&gt;I built it because I wanted faster navigation on HN without reaching for the mouse. The entire thing is a single JavaScript file with no dependencies. Source code is on &lt;a href='https://github.com/sajarin/kreamsicle'&gt;GitHub&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;To install, you'll need a userscript manager. On &lt;strong&gt;Chrome&lt;/strong&gt;, install &lt;a href='https://chrome.google.com/webstore/detail/tampermonkey/dhdgffkkebhmkfjojejmpbldmpobfkfo'&gt;Tampermonkey&lt;/a&gt; from the Chrome Web Store. On &lt;strong&gt;Firefox&lt;/strong&gt;, install &lt;a href='https://addons.mozilla.org/en-US/firefox/addon/violentmonkey/'&gt;Violentmonkey&lt;/a&gt; or &lt;a href='https://addons.mozilla.org/en-US/firefox/addon/tampermonkey/'&gt;Tampermonkey&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Once you have a userscript manager, open the &lt;a href='https://github.com/sajarin/kreamsicle/raw/main/kreamsicle.user.js'&gt;kreamsicle.user.js&lt;/a&gt; file directly; your manager will prompt you to install it. After that, visit Hacker News and press &lt;code&gt;Cmd+K&lt;/code&gt; to open the palette.&lt;/p&gt;
&lt;p&gt;Contributions and feedback are welcome!&lt;/p&gt;
&lt;h2 id=ai-simulated-hacker-news-discussion&gt;AI Simulated Hacker News Discussion&lt;/h2&gt;&lt;div class="hn-comments" data-title="Show HN: Kreamsicle – Cmd+K command palette for Hacker News"&gt;
&lt;script type="application/json"&gt;&#13;
{&#13;
  "comments": [&#13;
    {&#13;
      "id": 1,&#13;
      "user": "throwaway847",&#13;
      "time": "3 hours ago",&#13;
      "points": 127,&#13;
      "text": "Nice work but I've been using a bookmarklet that does this since 2019. It's 47 lines of code."&#13;
    },&#13;
    {&#13;
      "id": 2,&#13;
      "user": "grumpyengineer",&#13;
      "time": "2 hours ago",&#13;
      "points": 89,&#13;
      "text": "Why JavaScript? This could have been 200 lines of ClojureScript.",&#13;
      "children": [&#13;
        {&#13;
          "id": 3,&#13;
          "user": "lisper42",&#13;
          "time": "1 hour ago",&#13;
          "points": 34,&#13;
          "text": "Or just use Emacs with eww and you already have M-x for everything.",&#13;
          "children": [&#13;
            {&#13;
              "id": 4,&#13;
              "user": "vimgang",&#13;
              "time": "45 min ago",&#13;
              "points": 12,&#13;
              "text": "Vimium already gives you this for free on any website.",&#13;
              "children": [&#13;
                {&#13;
                  "id": 5,&#13;
                  "user": "sajarin",&#13;
                  "time": "30 min ago",&#13;
                  "points": 8,&#13;
                  "isOP": true,&#13;
                  "text": "Vimium doesn't let you search HN comments or find the monthly hiring threads with one keystroke.",&#13;
                  "children": [&#13;
                    {&#13;
                      "id": 6,&#13;
                      "user": "vimgang",&#13;
                      "time": "28 min ago",&#13;
                      "points": 41,&#13;
                      "text": "I could add that in about 20 minutes."&#13;
                    }&#13;
                  ]&#13;
                }&#13;
              ]&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 7,&#13;
      "user": "tangent_king",&#13;
      "time": "2 hours ago",&#13;
      "points": 156,&#13;
      "text": "This is cool but can we talk about how HN still doesn't have an official API? The Algolia integration is nice but it's basically a third-party dependency for core functionality.",&#13;
      "children": [&#13;
        {&#13;
          "id": 8,&#13;
          "user": "dang",&#13;
          "time": "1 hour ago",&#13;
          "points": 203,&#13;
          "isMod": true,&#13;
          "text": "We've been working on this. It's complicated. There are rate limiting concerns and we want to get it right.",&#13;
          "children": [&#13;
            {&#13;
              "id": 9,&#13;
              "user": "tangent_king",&#13;
              "time": "45 min ago",&#13;
              "points": 67,&#13;
              "text": "Thanks for the update! Any rough timeline?",&#13;
              "children": [&#13;
                {&#13;
                  "id": 10,&#13;
                  "user": "reply_guy",&#13;
                  "time": "30 min ago",&#13;
                  "points": 412,&#13;
                  "text": "He said it's complicated."&#13;
                }&#13;
              ]&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 11,&#13;
      "user": "actuallyused",&#13;
      "time": "2 hours ago",&#13;
      "points": 44,&#13;
      "text": "I installed this and it's genuinely good. The G T, G N shortcuts feel natural coming from vim. Minor bug: the domain search doesn't work if the URL has a trailing slash.",&#13;
      "children": [&#13;
        {&#13;
          "id": 12,&#13;
          "user": "sajarin",&#13;
          "time": "1 hour ago",&#13;
          "points": 12,&#13;
          "isOP": true,&#13;
          "text": "Good catch, I'll fix that."&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 13,&#13;
      "user": "security_person",&#13;
      "time": "1 hour ago",&#13;
      "points": 78,&#13;
      "text": "Just a heads up, you're injecting CSS directly into the page. A malicious update could exfiltrate data. Users should always pin to a specific commit.",&#13;
      "children": [&#13;
        {&#13;
          "id": 14,&#13;
          "user": "practical_person",&#13;
          "time": "45 min ago",&#13;
          "points": 23,&#13;
          "text": "This applies to literally every userscript ever made.",&#13;
          "children": [&#13;
            {&#13;
              "id": 15,&#13;
              "user": "security_person",&#13;
              "time": "30 min ago",&#13;
              "points": 56,&#13;
              "text": "Yes. And people should be aware of it."&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 16,&#13;
      "user": "design_critic",&#13;
      "time": "1 hour ago",&#13;
      "points": 31,&#13;
      "text": "The orange header is nice but the modal corners don't quite match HN's aesthetic. HN uses sharp corners everywhere."&#13;
    },&#13;
    {&#13;
      "id": 17,&#13;
      "user": "recruiter_spam",&#13;
      "time": "58 min ago",&#13;
      "points": -4,&#13;
      "flagged": true,&#13;
      "text": "Great project! We're hiring engineers who build things like this. Remote-first, competitive salary..."&#13;
    },&#13;
    {&#13;
      "id": 18,&#13;
      "user": "nostalgia",&#13;
      "time": "45 min ago",&#13;
      "points": 93,&#13;
      "text": "Remember when HN had a clean, fast interface? Now we need command palettes to navigate it?",&#13;
      "children": [&#13;
        {&#13;
          "id": 19,&#13;
          "user": "counterpoint",&#13;
          "time": "30 min ago",&#13;
          "points": 67,&#13;
          "text": "HN's interface is exactly the same as it was 15 years ago.",&#13;
          "children": [&#13;
            {&#13;
              "id": 20,&#13;
              "user": "nostalgia",&#13;
              "time": "20 min ago",&#13;
              "points": 14,&#13;
              "text": "That's the joke."&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 21,&#13;
      "user": "minimalist",&#13;
      "time": "40 min ago",&#13;
      "points": 52,&#13;
      "text": "760 lines for a command palette seems excessive. I'd be curious to see this refactored.",&#13;
      "children": [&#13;
        {&#13;
          "id": 22,&#13;
          "user": "pragmatic_dev",&#13;
          "time": "25 min ago",&#13;
          "points": 88,&#13;
          "text": "It includes the full CSS, keyboard handling, debounced async search, vim bindings, context detection, and clipboard integration. 760 lines for all that is fine."&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 23,&#13;
      "user": "philosophical",&#13;
      "time": "35 min ago",&#13;
      "points": 47,&#13;
      "text": "We've gone full circle. We removed keyboard shortcuts from everything to make it \"accessible,\" now power users are bolting them back on."&#13;
    },&#13;
    {&#13;
      "id": 24,&#13;
      "user": "helpful_person",&#13;
      "time": "30 min ago",&#13;
      "points": 19,&#13;
      "text": "For anyone on Firefox, you'll need to use Violentmonkey instead of Tampermonkey. Works great."&#13;
    },&#13;
    {&#13;
      "id": 25,&#13;
      "user": "bike_shedder",&#13;
      "time": "25 min ago",&#13;
      "points": 38,&#13;
      "text": "Why Cmd+K? Cmd+P is more standard (VS Code, Sublime). Cmd+K conflicts with terminal clear.",&#13;
      "children": [&#13;
        {&#13;
          "id": 26,&#13;
          "user": "other_opinion",&#13;
          "time": "15 min ago",&#13;
          "points": 29,&#13;
          "text": "Slack, Linear, Notion, Raycast all use Cmd+K. It's the de facto standard for command palettes now.",&#13;
          "children": [&#13;
            {&#13;
              "id": 27,&#13;
              "user": "bike_shedder",&#13;
              "time": "10 min ago",&#13;
              "points": 3,&#13;
              "text": "I still disagree."&#13;
            }&#13;
          ]&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 28,&#13;
      "user": "genuine_fan",&#13;
      "time": "20 min ago",&#13;
      "points": 61,&#13;
      "text": "This is the kind of small, focused tool I love seeing. Not everything needs to be a SaaS with a landing page. Nice work OP."&#13;
    },&#13;
    {&#13;
      "id": 29,&#13;
      "user": "self_promoter",&#13;
      "time": "15 min ago",&#13;
      "points": 22,&#13;
      "text": "I built something similar last year: [link to my thing that's completely different]"&#13;
    },&#13;
    {&#13;
      "id": 30,&#13;
      "user": "confused_user",&#13;
      "time": "10 min ago",&#13;
      "points": 2,&#13;
      "text": "How do I install this? What's a userscript?",&#13;
      "children": [&#13;
        {&#13;
          "id": 31,&#13;
          "user": "helpful_person",&#13;
          "time": "8 min ago",&#13;
          "points": 5,&#13;
          "text": "Install Tampermonkey browser extension, then click the .user.js file. It will offer to install it."&#13;
        }&#13;
      ]&#13;
    },&#13;
    {&#13;
      "id": 32,&#13;
      "user": "that_guy",&#13;
      "time": "5 min ago",&#13;
      "points": 7,&#13;
      "text": "Works great. Would love to see item voting support (u for upvote on the highlighted story)."&#13;
    }&#13;
  ]&#13;
}&#13;
&lt;/script&gt;
&lt;/div&gt;
</content>
    <link href="https://sajarin.com/blog/kreamsicle/" rel="alternate"/>
    <published>2026-01-25T18:11:23.295770+00:00</published>
  </entry>
  <entry>
    <id>https://sajarin.com/blog/technical-assessments-should-be-open-source/</id>
    <title>Technical Assessments Should be Open Source</title>
    <updated>2023-10-29T21:58:00+00:00</updated>
    <author>
      <name>blog</name>
      <email>hidden</email>
    </author>
    <content type="html">&lt;h2 id=the-hiring-problem&gt;The Hiring Problem&lt;/h2&gt;&lt;p&gt;This isn’t your typical rant about the cargo culting of Leetcode interviews (although it is a related point. ) There is a problem today with the process of identifying technical talent.&lt;/p&gt;
&lt;p&gt;The assessments we use to evaluate candidates are too heavily weighted against them. Algorithmic coding questions bias against those without a CS background. Take-home assessments demand too much time, or amount to actual engineering work being outsourced to candidates. And pair programming interviews depend largely on the skill of the interviewer: how they deliver the prompt and how well they can help the candidate get unstuck.&lt;/p&gt;
&lt;p&gt;Consider for a moment that even the most studious Leetcode practitioner fails to get consistent results across interviews. If interviews were truly standardized, securing an offer from one major company would all but guarantee offers from the rest, yet the industry rarely reflects this.&lt;/p&gt;
&lt;p&gt;All of this produces poor signal from individuals’ interview performance. As a result, individuals have to apply to hundreds of jobs and companies have to spend engineering time filtering through hundreds of applications. Put simply, scarce resources are allocated inefficiently, mainly because of our dubious ability to identify talent.&lt;/p&gt;
&lt;p&gt;Why are things like this? Why aren’t more people working on fixing this problem? I’m not sure. It’s almost a rite of passage for budding engineers to complain about some aspect of the hiring process (this blog post is mine). The answer might be simple: a developer-first method for evaluating candidates either cannot be imagined or is not profitable enough as a venture. Of the two, history suggests the latter is more likely.&lt;/p&gt;
&lt;h2 id=some-history&gt;Some History&lt;/h2&gt;&lt;p&gt;There was a recent Hacker News &lt;a href='https://news.ycombinator.com/item?id=37985450'&gt;thread&lt;/a&gt; about StarFighter, a “recruiting CTF” game where users were tasked with writing bots to battle other players and AI bots in an online multiplayer environment. The players who wrote the best-performing bots were then referred to companies for positions. The company and game have long been shut down, and a user wanted to know if any retrospective had been shared on the reasons why.&lt;/p&gt;
&lt;p&gt;In the thread, one of the creators of StarFighter explained that the idea never took off because companies would often reject the candidates who were referred. Even with a referral, candidates had to go through the company’s hiring process anyway.&lt;/p&gt;
&lt;p&gt;Another top &lt;a href='https://news.ycombinator.com/item?id=37985450#37987860'&gt;comment&lt;/a&gt; recounted their experience of working at a similar company. They claimed that most "companies don't have a screening problem, they have a sourcing problem" and that it is tricky to build a platform that attracts seasoned engineers, which is "what all recruiters want most of all".&lt;/p&gt;
&lt;p&gt;Tangentially related: another company, Sourceress, tried to use machine learning to automate sourcing candidates for companies. However, they shut down, citing problems with their business model. The founders (now founders of Imbue) mentioned that the more value they delivered, the faster they lost their best customers: once a company had closed a role using Sourceress, it no longer needed them. This lack of stickiness meant Sourceress always had to keep finding new companies to balance the high churn. That last point is speculative, but given that Sourceress ceased operations, it is likely they did not see the mid-stage growth they needed.&lt;/p&gt;
&lt;h2 id=learning-from-history&gt;Learning From History&lt;/h2&gt;&lt;p&gt;There are a few patterns one can glean from these examples. For one, these platforms are largely focused on the entry-level market, which is itself evidence of prioritizing companies over developers. Fresh graduates and career-changers are either naively unaware of bad interview experiences or consciously willing to put up with them, while seasoned developers can afford to be pickier. More importantly, many of these recruiting companies miss out on delivering a disproportionate amount of value because they ignore the small subset of seasoned developers practically begging for something better in this space.&lt;/p&gt;
&lt;p&gt;Another obvious pattern worth mentioning is the high churn among both customers and candidates. In theory this means the best sourcing companies have high throughput, but since these companies are all competing for the same scarce signal amid the noise of early-career candidates, it is difficult to project sustainable growth.&lt;/p&gt;
&lt;p&gt;Putting it all together: most platforms that focus on evaluating candidates eventually become sourcing pipelines in order to remain profitable. They all use essentially the same coding tests to filter for promising early-career engineers, who are not the most pressing need for smaller companies. Larger companies usually have their own pipeline and process in place and don’t need to rely on outside platforms; even when they do, there are plenty of options to choose from. These factors make it difficult to grow, especially since the faster such a company grows, the more churn it experiences.&lt;/p&gt;
&lt;p&gt;High competition. High churn. High volume. Low signal. Dubious growth. It is difficult to win in this space unless you rethink the whole approach.&lt;/p&gt;
&lt;h2 id=a-potential-solution&gt;A Potential Solution?&lt;/h2&gt;&lt;p&gt;So if assessment companies become sourcing companies and sourcing companies have a flawed business model, how does one build a business that fixes the problem with hiring in our industry?&lt;/p&gt;
&lt;p&gt;Working backward, we should create a platform that attracts seasoned developers. We can do this by designing trustworthy assessments that respect their time. Coding questions are out: they over-index on algorithmic knowledge and are more relevant to recent CS graduates than to senior engineers. Take-home assessments are out too: they demand too much upfront investment. Pair programming interviews are potentially viable but require synchronous investment from both sides, making them expensive (though arguably worth the cost for a good senior engineer).&lt;/p&gt;
&lt;p&gt;There’s a better solution: open source asynchronous debugging interviews. The idea is to take a piece of open source code, introduce some random bugs (with the assistance of AI), and ask the candidate to get the code back into a working state. Debugging is a shorter task than integrating features, it assesses the ability to both read and write code, and it is arguably a more interesting problem. Fixing bugs in a broken version of a popular open source library you’ve used in the past is more motivating than implementing features for a random CRUD app.&lt;/p&gt;
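&lt;p&gt;As a toy sketch of the bug-introduction step (my own illustration; the post doesn’t prescribe any particular tooling, and all names here are hypothetical), a small AST pass that flips a single operator is enough to produce a plausibly broken variant of a file:&lt;/p&gt;

```python
# Hypothetical sketch: inject one subtle bug into Python source by
# rewriting its AST. Requires Python 3.9+ for ast.unparse.
import ast

class OperatorSwapper(ast.NodeTransformer):
    """Plant a single bug by flipping the first '+' it finds to '-'."""
    def __init__(self):
        self.done = False

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if not self.done and isinstance(node.op, ast.Add):
            node.op = ast.Sub()  # the planted bug
            self.done = True
        return node

def inject_bug(source):
    """Return a buggy variant of the given Python source."""
    tree = OperatorSwapper().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

print(inject_bug("def total(xs):\n    return sum(xs) + 1\n"))
```

&lt;p&gt;In practice you’d want a curated set of mutation types that yield subtle, test-detectable failures rather than syntax errors; the AI assistance mentioned above could help pick mutation sites that look realistic.&lt;/p&gt;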
&lt;p&gt;To go even further, why not create an open source HackerRank-like platform that hosts dozens of these debugging exercises as a library for engineers to practice their skills? Why not make that library open for any developer to contribute to? A true hiring platform for developers, by developers.&lt;/p&gt;
&lt;h2 id=the-business&gt;The Business&lt;/h2&gt;&lt;p&gt;For the sake of argument, let’s assume the above proposal is a hiring platform that seasoned developers would love. The main question is, how does one avoid the other pitfall of high churn?&lt;/p&gt;
&lt;p&gt;In my view, there is nothing an assessment platform alone can do about it. The only way to curb the churn is to bundle assessments with other, stickier products. Most companies attempt this by rolling out their own applicant tracking system (ATS), but it is difficult to be compelling enough to compete with companies that specialize in that one product category.&lt;/p&gt;
&lt;p&gt;The general strategy toward growth would be to use the new assessment platform as a foothold to explore other customer needs that complement the initial offering. If you have a platform that developers love, you’ll invariably attract the best developers, which, for an open source product, not only strengthens the product but also creates a unique competitive advantage that others will find difficult to replicate.&lt;/p&gt;
&lt;h2 id=conclusion&gt;Conclusion&lt;/h2&gt;&lt;p&gt;So there you have it, another rant to add to the other rants on the topic of technical hiring. This is all a small, cursory glance at a few companies and trends in this space. Many profitable recruiting focused companies may do things differently.&lt;/p&gt;
&lt;p&gt;I’m thinking about building a product that aligns with some of the ideas in this post. If you’re interested in trying it out, please reach out!&lt;/p&gt;
&lt;p&gt;Subscribe to my blog via &lt;a href='/blog/subscribe'&gt;email&lt;/a&gt; or &lt;a href='/blog/feed'&gt;RSS feed&lt;/a&gt;.&lt;/p&gt;
</content>
    <link href="https://sajarin.com/blog/technical-assessments-should-be-open-source/" rel="alternate"/>
    <published>2023-10-29T21:58:00+00:00</published>
  </entry>
</feed>
