<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>Home on Dave Zvenyach's website</title><link>https://vdavez.com/</link><description>A website created by V. David Zvenyach on Dave Zvenyach's website</description><generator>Hugo 0.147.9 -- gohugo.io</generator><language>en</language><atom:link href="https://vdavez.com/index.xml" rel="self" type="application/rss+xml"/><item><title>Changing the default search language in RDS</title><link>https://vdavez.com/2024/02/changing-the-default-search-language-in-rds/</link><pubDate>Sat, 03 Feb 2024 00:00:00 -0600</pubDate><guid>https://vdavez.com/2024/02/changing-the-default-search-language-in-rds/</guid><description>&lt;p>&lt;a href="https://www.postgresql.org/docs/current/textsearch.html">Full text search in PostgreSQL&lt;/a> is pretty great!&lt;/p>
&lt;p>Recently, though, I found a bug in my application that perplexed me. When I searched for the word &amp;ldquo;Oddball&amp;rdquo; in the app, there were no results even though there should have been an entry with Oddball in it. Making matters worse, when I ran the search on my computer in dev, it worked!&lt;/p>
&lt;p>After an inordinate amount of time being confused, I discovered that the root cause of the problem was that the default language in PostgreSQL differed between my dev computer and the prod environment. In dev, the language was set to &lt;code>pg_catalog.english&lt;/code> but in prod (running in AWS RDS), the language defaults to &lt;code>pg_catalog.simple&lt;/code>.&lt;/p></description><content:encoded>&lt;![CDATA[<p><a href="https://www.postgresql.org/docs/current/textsearch.html">Full text search in PostgreSQL</a> is pretty great!</p><p>Recently, though, I found a bug in my application that perplexed me. When I searched for the word &ldquo;Oddball&rdquo; in the app, there were no results even though there should have been an entry with Oddball in it. Making matters worse, when I ran the search on my computer in dev, it worked!</p><p>After an inordinate amount of time being confused, I discovered that the root cause of the problem was that the default language in PostgreSQL differed between my dev computer and the prod environment. In dev, the language was set to <code>pg_catalog.english</code> but in prod (running in AWS RDS), the language defaults to <code>pg_catalog.simple</code>.</p><p>As a result, the &ldquo;lexemes&rdquo; that were stored in the database were different from the search query. For example, when I ran <code>SELECT * FROM ts_debug('english', 'oddball');</code> the lexeme was <code>{oddbal}</code> (note: only one <code>l</code>). But when I ran <code>SELECT * FROM ts_debug('simple','oddball');</code>, the lexeme came back as <code>{oddball}</code> (two <code>l</code>s). That caused the bug.</p><p>But the <em>solution</em> required a bit more work because RDS doesn&rsquo;t let you adjust the settings as easily as you might locally. Ultimately, to solve it, I needed to adjust the <code>default_text_search_config</code> parameter. If, however, you simply ran <code>SET default_text_search_config = 'pg_catalog.english';</code> it wouldn&rsquo;t save it. 
As soon as you exited the session, the config would revert. Most publicly posted solutions emphasized adjusting the PostgreSQL configuration files, but because it&rsquo;s RDS, you can&rsquo;t really do that.</p><p>In the end, the solution required just one more step: granting the <em>user</em> the ability to adjust the public schema and then altering the user&rsquo;s <em>role</em> to set the default language.</p><p>Here&rsquo;s the code:</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sql" data-lang="sql">GRANT USAGE ON SCHEMA public TO dbuser;
ALTER ROLE dbuser SET default_text_search_config = 'pg_catalog.english';
</code></pre></div><p>Ultimately, this simple fix solved the problem. Because I spent way too much time googling for an answer, though, hopefully this post will help someone else identify the problem and get to a solution a bit faster.</p>
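<p>To sketch how you might verify the fix (assuming a role named <code>dbuser</code>, as above), the queries below compare the lexemes the two configs produce and confirm the new default after reconnecting:</p>

```sql
-- Check which config is currently active (on RDS this defaults to pg_catalog.simple)
SHOW default_text_search_config;

-- Compare the lexemes each config produces for the same word
SELECT alias, lexemes FROM ts_debug('simple', 'oddball');   -- {oddball}
SELECT alias, lexemes FROM ts_debug('english', 'oddball');  -- {oddbal}

-- After the GRANT / ALTER ROLE above, reconnect as dbuser and confirm it stuck
SHOW default_text_search_config;
```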
]]></content:encoded><category>til</category><category>postgresql</category><enclosure url="https://wiki.postgresql.org/images/a/a4/PostgreSQL_logo.3colors.svg" type="image/svg+xml"/></item><item><title>How to remove *some* duplicates from a dataframe</title><link>https://vdavez.com/2024/01/how-to-remove-some-duplicates-from-a-dataframe/</link><pubDate>Sun, 28 Jan 2024 00:00:00 -0600</pubDate><guid>https://vdavez.com/2024/01/how-to-remove-some-duplicates-from-a-dataframe/</guid><description>&lt;p>I have a dataframe that contains duplicates. And although I wanted to get rid of many of the duplicates, I &lt;em>also&lt;/em> wanted to keep some duplicates on certain conditions. In this post, I outline the motivating problem and the solution (Note: I&amp;rsquo;m using &lt;a href="https://pola.rs">polars&lt;/a> here but you could use pandas/etc for your dataframe).&lt;/p>
&lt;h2 id="the-motivating-problem">The motivating problem&lt;/h2>
&lt;p>To illustrate, here&amp;rsquo;s a random dataframe of foods, their categories, and some &amp;ldquo;last_date&amp;rdquo; field.&lt;/p></description><content:encoded>&lt;![CDATA[<p>I have a dataframe that contains duplicates. And although I wanted to get rid of many of the duplicates, I <em>also</em> wanted to keep some duplicates on certain conditions. In this post, I outline the motivating problem and the solution (Note: I&rsquo;m using <a href="https://pola.rs">polars</a> here but you could use pandas/etc for your dataframe).</p><h2 id="the-motivating-problem">The motivating problem</h2><p>To illustrate, here&rsquo;s a random dataframe of foods, their categories, and some &ldquo;last_date&rdquo; field.</p><table><thead><tr><th>food</th><th>food_type</th><th>last_date</th></tr></thead><tbody><tr><td>carrot</td><td>vegetable</td><td>2024-01-28</td></tr><tr><td>lettuce</td><td>vegetable</td><td>2024-01-28</td></tr><tr><td>lettuce</td><td>vegetable</td><td>2024-01-27</td></tr><tr><td>apple</td><td>fruit</td><td>2024-01-28</td></tr><tr><td>banana</td><td>fruit</td><td>2024-01-28</td></tr><tr><td>banana</td><td>fruit</td><td>2024-01-27</td></tr><tr><td>salmon</td><td>seafood</td><td>2024-01-28</td></tr><tr><td>salmon</td><td>seafood</td><td>2024-01-27</td></tr></tbody></table><p>What I <em>want</em> in this example is to keep both lettuces but keep only the most recent banana and salmon. In other words, I want my final table to look like this:</p><table><thead><tr><th>food</th><th>food_type</th><th>last_date</th></tr></thead><tbody><tr><td>carrot</td><td>vegetable</td><td>2024-01-28</td></tr><tr><td>lettuce</td><td>vegetable</td><td>2024-01-28</td></tr><tr><td>lettuce</td><td>vegetable</td><td>2024-01-27</td></tr><tr><td>apple</td><td>fruit</td><td>2024-01-28</td></tr><tr><td>banana</td><td>fruit</td><td>2024-01-28</td></tr><tr><td>salmon</td><td>seafood</td><td>2024-01-28</td></tr></tbody></table><p>See how there are two fewer rows? 
The second banana and the second salmon entries drop off, but the second lettuce entry stays.</p><h2 id="how-can-you-do-that">How can you do that?</h2><p>The way to reason about this is that you&rsquo;re going to (1) split the dataframe into two separate dataframes, (2) do your deduplication on one of them, and then (3) concatenate them into a new dataframe. Here&rsquo;s some code:</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Sort the dataframe
df_sorted = df.sort(["last_date"], descending=True)

# Split the dataframe into two dataframes, one with vegetables and one without
df_vegetables = df_sorted.filter(df_sorted["food_type"] == "vegetable")
df_others = df_sorted.filter(df_sorted["food_type"] != "vegetable")

# Drop the duplicates from the dataframe that does not have vegetables in it
df_others_latest = df_others.unique(subset=["food"])

# Combine the dataframes again, now with the dupes dropped
final_df = pl.concat([df_vegetables, df_others_latest])
</code></pre></div><p>And voila, a conditionally deduped dataframe.</p><p>Is there a better way to do this? Perhaps! But it worked for me!</p><h2 id="ai-disclosure">AI Disclosure</h2><p>ChatGPT helped me think through a solution here when I got stuck. Hopefully this is the best way to do it, but if there&rsquo;s a better way, please let me know and I&rsquo;ll update the blog with a better solution.</p>
]]></content:encoded><category>til</category><enclosure url="https://raw.githubusercontent.com/pola-rs/polars-static/master/logos/polars_github_logo_rect_dark_name.svg" type="image/svg+xml"/></item><item><title>How to use `scan_csv` with a file-like object in Polars</title><link>https://vdavez.com/2024/01/how-to-use-scan_csv-with-a-file-like-object-in-polars/</link><pubDate>Fri, 19 Jan 2024 00:00:00 -0600</pubDate><guid>https://vdavez.com/2024/01/how-to-use-scan_csv-with-a-file-like-object-in-polars/</guid><description>&lt;p>I have a case where a bunch of CSVs are stored together in a zip file and I want to convert those CSVs into a parquet file. I&amp;rsquo;m using &lt;a href="https://pola.rs">polars&lt;/a> because it has an awesome ability to lazily read CSVs and then efficiently sink to parquet. It&amp;rsquo;s actually kind of magical.&lt;/p>
&lt;p>But, there&amp;rsquo;s a problem. Because the CSVs are in a zipfile, you hit a snag pretty quick. That&amp;rsquo;s because you can&amp;rsquo;t just pass the CSV file name to the &lt;code>scan_csv&lt;/code> function. The following code will &lt;em>not&lt;/em> work!&lt;/p></description><content:encoded>&lt;![CDATA[<p>I have a case where a bunch of CSVs are stored together in a zip file and I want to convert those CSVs into a parquet file. I&rsquo;m using <a href="https://pola.rs">polars</a> because it has an awesome ability to lazily read CSVs and then efficiently sink to parquet. It&rsquo;s actually kind of magical.</p><p>But, there&rsquo;s a problem. Because the CSVs are in a zipfile, you hit a snag pretty quick. That&rsquo;s because you can&rsquo;t just pass the CSV file name to the <code>scan_csv</code> function. The following code will <em>not</em> work!</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import polars as pl

with zip_file.open("csv_in_zipfolder.csv") as csv_file:
    pl.scan_csv(csv_file).sink_parquet("my_new_file.parquet")
</code></pre></div><p>That&rsquo;s because <code>csv_file</code> is actually a <code>ZipExtFile</code>, and the <code>scan_csv</code> function can&rsquo;t accept that! According to the <a href="https://docs.pola.rs/py-polars/html/reference/api/polars.scan_csv.html">pola.rs API documentation</a>, <code>scan_csv</code> only accepts a path to a file. Unlike the <code>read_csv</code> function, which accepts a path <em>or</em> a file-like object, <code>scan_csv</code> does not allow file-like objects.</p><p>This also means that attempting to download from a URL directly into <code>scan_csv</code> won&rsquo;t work either. Bummer, right?</p><p>But, there&rsquo;s a hack if your csv file will fit in memory*: write it to a temporary named file and then pass that temporary named file to the <code>scan_csv</code> function. Here&rsquo;s how that looks:</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import tempfile

import polars as pl

with zip_file.open("csv_in_zipfolder.csv") as csv_file:
    # Create the temporary file
    with tempfile.NamedTemporaryFile() as tf:
        tf.write(csv_file.read())  # Write the csv file to the temporary file
        tf.seek(0)  # Start at the beginning of the temporary file
        pl.scan_csv(tf.name).sink_parquet("my_new_file.parquet")
</code></pre></div><p>By writing the file-like object out to a named temporary file, you can happily pass the path to that file to polars and scan to your heart&rsquo;s content.</p><p>* Technically, you could even iterate over the lines of your CSV file if it doesn&rsquo;t fit into memory at all.</p>
]]></content:encoded><category>til</category><enclosure url="https://raw.githubusercontent.com/pola-rs/polars-static/master/logos/polars_github_logo_rect_dark_name.svg" type="image/svg+xml"/></item><item><title>Using python map() instead of pandas .apply()</title><link>https://vdavez.com/2024/01/using-python-map-instead-of-pandas-.apply/</link><pubDate>Sun, 14 Jan 2024 00:00:00 -0600</pubDate><guid>https://vdavez.com/2024/01/using-python-map-instead-of-pandas-.apply/</guid><description>&lt;p>Recently, I was playing with a rather large dataset using &lt;a href="https://pandas.pydata.org/">pandas&lt;/a> and trying to improve the performance of my code. While reaching for the &lt;a href="https://docs.python.org/3/library/multiprocessing.html">&lt;code>multiprocessing&lt;/code> library&lt;/a>, I learned about one small way to improve performance &lt;em>and&lt;/em> improve the readability of my code: use &lt;code>map&lt;/code> instead of &lt;code>apply&lt;/code>.&lt;/p>
&lt;p>Let&amp;rsquo;s look at some code:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-python" data-lang="python">&lt;span class="line">&lt;span class="cl">&lt;span class="kn">import&lt;/span> &lt;span class="nn">numpy&lt;/span> &lt;span class="k">as&lt;/span> &lt;span class="nn">np&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="kn">import&lt;/span> &lt;span class="nn">pandas&lt;/span> &lt;span class="k">as&lt;/span> &lt;span class="nn">pd&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="c1"># Let&amp;#39;s create a random dataframe&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="n">df&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">pd&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">DataFrame&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="n">np&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">random&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">randint&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="mi">0&lt;/span>&lt;span class="p">,&lt;/span>&lt;span class="mi">100&lt;/span>&lt;span class="p">,&lt;/span>&lt;span class="n">size&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="mi">10&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="mi">3&lt;/span>&lt;span class="p">)),&lt;/span> &lt;span class="n">columns&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="nb">list&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="s1">&amp;#39;ABC&amp;#39;&lt;/span>&lt;span class="p">))&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="c1"># Normal approach: Use .apply() to iterate through the rows&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="n">df&lt;/span>&lt;span class="p">[&lt;/span>&lt;span class="s2">&amp;#34;D&amp;#34;&lt;/span>&lt;span class="p">]&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">df&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">apply&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="k">lambda&lt;/span> &lt;span class="n">x&lt;/span>&lt;span class="p">:&lt;/span> &lt;span class="n">x&lt;/span>&lt;span class="p">[&lt;/span>&lt;span class="s2">&amp;#34;A&amp;#34;&lt;/span>&lt;span class="p">]&lt;/span> &lt;span class="o">**&lt;/span> &lt;span class="mi">2&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="n">axis&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="mi">1&lt;/span>&lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="c1"># The new approach: Use python map() to iterate through the rows&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="k">def&lt;/span> &lt;span class="nf">power&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="n">x&lt;/span>&lt;span class="p">):&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="k">return&lt;/span> &lt;span class="n">x&lt;/span> &lt;span class="o">**&lt;/span> &lt;span class="mi">2&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="n">df&lt;/span>&lt;span class="p">[&lt;/span>&lt;span class="s2">&amp;#34;E&amp;#34;&lt;/span>&lt;span class="p">]&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="nb">list&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="nb">map&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="n">power&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="n">df&lt;/span>&lt;span class="p">[&lt;/span>&lt;span class="s2">&amp;#34;A&amp;#34;&lt;/span>&lt;span class="p">]))&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The new code is admittedly more verbose (requires three lines instead of one), but there are two advantages to this approach. Let&amp;rsquo;s start with the advantage that motivated it.&lt;/p></description><content:encoded>&lt;![CDATA[<p>Recently, I was playing with a rather large dataset using <a href="https://pandas.pydata.org/">pandas</a> and trying to improve the performance of my code. While reaching for the <a href="https://docs.python.org/3/library/multiprocessing.html"><code>multiprocessing</code> library</a>, I learned about one small way to improve performance <em>and</em> improve the readability of my code: use <code>map</code> instead of <code>apply</code>.</p><p>Let&rsquo;s look at some code:</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import numpy as np
import pandas as pd

# Let's create a random dataframe
df = pd.DataFrame(np.random.randint(0,100,size=(10, 3)), columns=list('ABC'))

# Normal approach: Use .apply() to iterate through the rows
df["D"] = df.apply(lambda x: x["A"] ** 2, axis=1)

# The new approach: Use python map() to iterate through the rows
def power(x):
    return x ** 2

df["E"] = list(map(power, df["A"]))
</code></pre></div><p>The new code is admittedly more verbose (requires three lines instead of one), but there are two advantages to this approach. Let&rsquo;s start with the advantage that motivated it.</p><h2 id="adding-multiprocessing-is-trivial-with-this-pattern">Adding multiprocessing is trivial with this pattern</h2><p>Now, instead of trying to figure out where / how you&rsquo;re going to handle multiprocessing in your code, you can simply replace <code>map</code> with <code>pool.map</code>. Let&rsquo;s see it in action.</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import multiprocessing

# Pro tip... you're going to want to make sure this has one of those
# `if __name__ == "__main__":` things in front if you're using
# multiprocessing.
with multiprocessing.Pool() as pool:
    df["F"] = list(pool.map(power, df["A"]))
</code></pre></div><p>See? Just a slight tweak in the code and suddenly you&rsquo;re using all the cores.</p><h2 id="you-get-a-performance-boost-without-any-multiprocessing">You get a performance boost without any multiprocessing</h2><p>This one kind of surprised me, tbh. I tried to see whether, without any multiprocessing, I&rsquo;d get a speed bump.</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">%timeit df["D"] = df.apply(lambda x: x["A"] ** 2, axis=1)
# 153 µs ± 951 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

%timeit df["E"] = list(map(power, df["A"]))
# 30.7 µs ± 119 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre></div><p>Sure enough: a <strong>5x speed increase</strong>!</p><p>I&rsquo;m not honestly sure <em>why</em> this happens; most likely it&rsquo;s the per-row overhead of <code>.apply(axis=1)</code>, which builds a Series for every row, while <code>map</code> works on the column&rsquo;s values directly. But hey, a 5x improvement is notable!</p><p>Have fun mapping!</p>
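<p>Putting the two approaches side by side in one runnable snippet (the seeded generator is my addition, for reproducibility) shows they produce identical results:</p>

```python
import numpy as np
import pandas as pd

def power(x):
    return x ** 2

# Seeded so the run is reproducible (the post used an unseeded randint)
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 100, size=(10, 3)), columns=list("ABC"))

df["D"] = df.apply(lambda row: row["A"] ** 2, axis=1)  # row-wise apply
df["E"] = list(map(power, df["A"]))                    # plain map over one column
```

<p>Of course, for something as simple as squaring, the fully vectorized <code>df["A"] ** 2</code> beats both; <code>map</code> shines when the function can&rsquo;t easily be vectorized.</p>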
]]></content:encoded><category>til</category><enclosure url="https://vdavez.com/images/more_you_know.png" type="image/png"/></item><item><title>TIL about launchd.plists</title><link>https://vdavez.com/2024/01/til-about-launchd.plists/</link><pubDate>Wed, 10 Jan 2024 00:00:00 -0600</pubDate><guid>https://vdavez.com/2024/01/til-about-launchd.plists/</guid><description>&lt;p>I created a Django app that runs on my computer. Most of the time, I don&amp;rsquo;t really think about running it as a background service because it&amp;rsquo;s not a production application. But I decided recently that I &lt;em>actually wanted&lt;/em> it to be a background service on my Apple computer. And even though I&amp;rsquo;m not sure this is the &lt;em>right way&lt;/em> to do it, I&amp;rsquo;m here to tell you it works!&lt;/p></description><content:encoded>&lt;![CDATA[<p>I created a Django app that runs on my computer. Most of the time, I don&rsquo;t really think about running it as a background service because it&rsquo;s not a production application. But I decided recently that I <em>actually wanted</em> it to be a background service on my Apple computer. 
And even though I&rsquo;m not sure this is the <em>right way</em> to do it, I&rsquo;m here to tell you it works!</p><p>First, you create a <code>.plist</code> file; let&rsquo;s call it <code>com.vdavez.app.service.plist</code>.</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="cp">&lt;?xml version="1.0" encoding="UTF-8"?></span>
</span></span><span class="line"><span class="cl"><span class="cp">&lt;!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;plist</span> <span class="na">version=</span><span class="s">"1.0"</span><span class="nt">></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;dict></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;key></span>RunAtLoad<span class="nt">&lt;/key></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;true/></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;key></span>KeepAlive<span class="nt">&lt;/key></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;true/></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;key></span>Label<span class="nt">&lt;/key></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;string></span>Dave's Django App Agent<span class="nt">&lt;/string></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;key></span>ProgramArguments<span class="nt">&lt;/key></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;array></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;string></span>/Users/vdavez/.local/bin/poetry<span class="nt">&lt;/string></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;string></span>run<span class="nt">&lt;/string></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;string></span>gunicorn<span class="nt">&lt;/string></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;string></span>myapp:app<span class="nt">&lt;/string></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/array></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;key></span>WorkingDirectory<span class="nt">&lt;/key></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;string></span>/Users/vdavez/app<span class="nt">&lt;/string></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/dict></span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/plist></span>
</span></span></code></pre></div><p>The key things to pay attention to here are the <code>ProgramArguments</code> and <code>WorkingDirectory</code> keys. <code>ProgramArguments</code> holds the command you want to run (taking care to make the first string an absolute path), and <code>WorkingDirectory</code> is the directory the command runs in.</p><p>I&rsquo;m using poetry to manage the virtual environment, so it&rsquo;s just calling <code>poetry run gunicorn myapp:app</code> as if I were doing it in the appropriate directory.</p><p>Next, save this file in <code>~/Library/LaunchAgents</code>, and run <code>launchctl load ~/Library/LaunchAgents/com.vdavez.app.service.plist</code>. Now, it just works!</p><p>Hope someone finds this useful background material. 🥁</p>
]]></content:encoded><category>til</category><enclosure url="https://vdavez.com/images/more_you_know.png" type="image/png"/></item><item><title>Computers, Robots, and Erasure</title><link>https://vdavez.com/2023/01/computers-robots-and-erasure/</link><pubDate>Tue, 10 Jan 2023 00:00:00 -0600</pubDate><guid>https://vdavez.com/2023/01/computers-robots-and-erasure/</guid><description>&lt;figure class="grid justify-center object-cover">&lt;a href="#ZgotmplZ">
&lt;img loading="lazy" src="#ZgotmplZ"
alt="Women computer programmers working with an ENIAC" width="300" height="300"/> &lt;/a>
&lt;/figure>
&lt;p>What do you think of when you see the word &amp;ldquo;computer&amp;rdquo;? Do you think of a human?&lt;/p>
&lt;p>In the 21st century, the answer is obviously not; a computer is decidedly &lt;em>not&lt;/em> a human. But at the beginning of the 20th century, you would. And, likely, &lt;a href="https://www.womenshistory.org/articles/women-and-computing">you&amp;rsquo;d think of a woman&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>In the mid-1950s, Kathleen Wicker received an assignment at work with one of the first electronic computers at the National Aeronautics and Space Administration (NASA). &amp;ldquo;At the time, I hardly knew what a computer was,&amp;rdquo; she recalled in an interview 40 years later. &amp;ldquo;I thought it was a woman.&amp;rdquo;&lt;/p></description><content:encoded>&lt;![CDATA[<figure class="grid justify-center object-cover"><a href="#ZgotmplZ"><img loading="lazy" src="#ZgotmplZ" alt="Women computer programmers working with an ENIAC" width="300" height="300"/></a></figure><p>What do you think of when you see the word &ldquo;computer&rdquo;? Do you think of a human?</p><p>In the 21st century, the answer is obviously not; a computer is decidedly <em>not</em> a human. But at the beginning of the 20th century, you would. And, likely, <a href="https://www.womenshistory.org/articles/women-and-computing">you&rsquo;d think of a woman</a>:</p><blockquote><p>In the mid-1950s, Kathleen Wicker received an assignment at work with one of the first electronic computers at the National Aeronautics and Space Administration (NASA). &ldquo;At the time, I hardly knew what a computer was,&rdquo; she recalled in an interview 40 years later. &ldquo;I thought it was a woman.&rdquo;</p><p>Wicker&rsquo;s statement might seem surprising today, but in the mid-twentieth century, the term &ldquo;computer&rdquo; referred to people, not machines. It was a job title, describing someone who performed mathematical equations and calculations, and NASA employed hundreds of women as &ldquo;human computers&rdquo; in this era.</p></blockquote><p>So, in just about 50 years, the use of the word &ldquo;computer&rdquo; came to mean the opposite of a human.</p><p>What about the word &ldquo;robot&rdquo;? Do you think of a human?</p><p>In the 21st century among English speakers, the answer is again decidedly <em>not</em> a human. 
This, too, was a <a href="https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/">20th century innovation</a>:</p><blockquote><p>Robot is drawn from an old Church Slavonic word, robota, for &ldquo;servitude,&rdquo; &ldquo;forced labor&rdquo; or &ldquo;drudgery.&rdquo; The word, which also has cognates in German, Russian, Polish and Czech, was a product of the central European system of serfdom by which a tenant&rsquo;s rent was paid for in forced labor or service.</p></blockquote><p>Today, in Russian, рабочий (pronounced &ldquo;rabochee&rdquo;) means &ldquo;worker&rdquo; and работа (pronounced &ldquo;rabota&rdquo;) means &ldquo;work&rdquo;. So, how did this eastern European word referring to manual labor (specifically serfs) come to mean the opposite?</p><p>It owes its origin to &ldquo;a brilliant Czech playwright, novelist and journalist named Karel Čapek (1880-1938) who introduced it in his 1920 hit play, R.U.R., or Rossum&rsquo;s Universal Robots.&rdquo;</p><blockquote><p>Taking its cues from other literary accounts of scientifically created life forms such as Mary Shelley&rsquo;s classic Frankenstein and the Yiddish-Czech legend The Golem, <em>R.U.R.</em> tells the story of a company using the latest biology, chemistry and physiology to mass produce workers who &ldquo;lack nothing but a soul.&rdquo; The robots perform all the work that humans preferred not to do and, soon, the company is inundated with orders. In early drafts of his play, Čapek named these creatures <em>labori</em>, after the Latinate root for <em>labor</em>, but worried that the term sounded too &ldquo;bookish.&rdquo; At the suggestion of his brother, Josef, Čapek ultimately opted for <em>roboti</em>, or in English, robots.</p><p>In the play&rsquo;s final act, the robots revolt against their human creators. 
After killing most of the people living on the planet, the robots realize they need humans because none of them can figure out the means to manufacture more robots &ndash; a secret that dies out with the last human being. In the end, there is a deus ex machina moment, when two robots somehow acquire the human traits of love and compassion and go off into the sunset to make the world anew.</p><p>Audiences loved the play across Europe and the United States.</p></blockquote><p>In 100 years, robots went from human laborers at the bottom of the social order to being literally inhuman.</p><p>Beyond it being etymologically interesting that the words &ldquo;computer&rdquo; and &ldquo;robot&rdquo; have had their meanings inverted, I can&rsquo;t help but wonder about the throughline: that the altered words had the effect of erasing and dehumanizing the labor of women and workers.</p><p>I also can&rsquo;t help but wonder whether the early 20th century fear of a robot rebellion echoes the late 18th century fears of slave revolts in the United States.</p><p>All of which raises the question: what words do we use today to refer to humans that will mean the opposite 100 years from now? For example, consider modern-day anxieties around artificial intelligence. Already we have seen how &ldquo;neural networks&rdquo;, which just a few years ago would automatically evoke human brains, have become commonly associated with machine learning and automation.</p><p>It&rsquo;s also worth considering what an alternative history might have been where, instead of the language of technology being deployed to erase the contributions of women and workers and replace humans, our language evolved in a way that elevated their humanity.</p><p>In writing this, I wish I had a slightly stronger thesis about what it is about our culture that caused these two specific inversions, and a stronger suggestion about how to properly think about them. 
But in the meantime, it&rsquo;s worth reflecting on the power of words, even as they change over time.</p>
]]></content:encoded><enclosure url="https://www.womenshistory.org/sites/default/files/images/2018-04/ComputerLesson_Card.png" type="image/png"/></item><item><title>War Ration Books and Plain Language</title><link>https://vdavez.com/2022/12/war-ration-books-and-plain-language/</link><pubDate>Sat, 24 Dec 2022 00:00:00 -0600</pubDate><guid>https://vdavez.com/2022/12/war-ration-books-and-plain-language/</guid><description>&lt;figure>
&lt;img loading="lazy" src="https://vdavez.com/images/war_ration_books_instructions.jpg"
alt="A photograph of a War Ration Book&amp;#39;s instructions"/>
&lt;/figure>
&lt;p>At the onset of World War II, the US government established a &lt;a href="https://www.nationalww2museum.org/war/articles/rationing">system of rationing foods and supplies&lt;/a>. As part of that system, individuals received &amp;ldquo;War Ration Books&amp;rdquo; with stamps that could be exchanged for rationed goods.&lt;/p>
&lt;p>The other day, a family member discovered a handful of these books in another family member&amp;rsquo;s home and I was struck by the instructions on the back of one.&lt;/p></description><content:encoded>&lt;![CDATA[<figure><img loading="lazy" src="/images/war_ration_books_instructions.jpg" alt="A photograph of a War Ration Book's instructions"/></figure><p>At the onset of World War II, the US government established a <a href="https://www.nationalww2museum.org/war/articles/rationing">system of rationing foods and supplies</a>. As part of that system, individuals received &ldquo;War Ration Books&rdquo; with stamps that could be exchanged for rationed goods.</p><p>The other day, a family member discovered a handful of these books in another family member&rsquo;s home and I was struck by the instructions on the back of one.</p><p>On the top, the book has straightforward, directive prose:</p><blockquote><ol><li>This book is valuable. Do not lose it.</li><li>Each stamp authorizes you to purchase rationed goods in the quantities and at the times designated by the Office of Price Administration. Without the stamps you will be unable to purchase those goods.</li><li>Detailed instructions concerning the use of the book and the stamps will be issued. Watch for those instructions so that you will know how to use your book and stamps. Your Local War Price and Rationing Board can give you full information.</li><li>Do not throw this book away when all of the stamps have been used, or when the time for their use has expired. You may be required to present this book when you apply for subsequent books.</li></ol></blockquote><p>On the bottom, the book borders on propaganda:</p><blockquote><p>Rationing is a vital part of your country&rsquo;s war effort. 
Any attempt to violate the rules is an effort to deny someone his share and will create hardship and help the enemy.</p><p>This book is your Government&rsquo;s assurance of your right to buy your fair share of certain goods made scarce by war. Price ceilings have also been established for your protection. Dealers must post these prices conspicuously. Don&rsquo;t pay more.</p><p>Give your whole support to rationing and thereby conserve our vital goods.</p><p>Be guided by the rule: &ldquo;If you don&rsquo;t need it, DON&rsquo;T BUY IT.&rdquo;</p></blockquote><p>I&rsquo;m fascinated by these instructions because they seem to be an excellent example of the use of <a href="https://www.plainlanguage.gov">plain language in government</a>.</p><p>The instructions are written for their audience. These books were intended for distribution to the general public (though <a href="https://www.womenshistory.org/articles/food-rationing-and-canning-world-war-ii">women were the primary users</a>) and used simple, straightforward words and short, concise sentences (e.g., a Flesch-Kincaid Grade Level of 5.7). They pull the reader in quickly (&ldquo;This book is valuable.&rdquo;). There&rsquo;s no jargon here (though I could quibble about whether the Office of Price Administration needed to be named specifically). There are no frills.</p><p>The instructions are organized well, using a numbered list at the top, and prose near the bottom. They use strong typography, mixing italics and capitalization to highlight the main points.</p><p>And the instructions tell a story. War Ration Books, which were fundamentally about limiting people&rsquo;s ability to access basic commodities, were framed around language of civic duty, fairness, and service. 
The instructions directly connected the use of the stamps to the war effort.</p><p>In short, these instructions are an interesting historic example of the government taking plain language seriously and how good content design can make a significant impact.</p>
]]></content:encoded><enclosure url="https://vdavez.com/images/war_ration_books_instructions.jpg" type="image/jpeg"/></item><item><title>About my habits.sh</title><link>https://vdavez.com/2022/12/about-my-habits.sh/</link><pubDate>Wed, 21 Dec 2022 00:00:00 -0600</pubDate><guid>https://vdavez.com/2022/12/about-my-habits.sh/</guid><description>&lt;figure>&lt;a href="#ZgotmplZ">
&lt;img loading="lazy" src="#ZgotmplZ"
alt="A gif demo of my habits tracker"/> &lt;/a>
&lt;/figure>
&lt;p>As we enter the new year, it&amp;rsquo;s time to think about &lt;a href="https://vdavez.com/2021/01/habits-like-company/">habits&lt;/a>! As I&amp;rsquo;ve written before, &amp;ldquo;habits are a product of the environment in which you operate. And if you make your habits part of that environment, they’re more likely to stick.&amp;rdquo;&lt;/p>
&lt;p>With this in mind, I created a new command-line interface application called &lt;a href="https://github.com/vdavez/habits">habits.sh&lt;/a>. It&amp;rsquo;s extremely simple; you configure a few things you want to track every day, run an initialization script, and then, every morning, you can get prompts to track those habits. Over the course of the day, I update my tracker with a single command: &amp;ldquo;habits&amp;rdquo;!&lt;/p></description><content:encoded>&lt;![CDATA[<figure><a href="#ZgotmplZ"><img loading="lazy" src="#ZgotmplZ" alt="A gif demo of my habits tracker"/></a></figure><p>As we enter the new year, it&rsquo;s time to think about <a href="/2021/01/habits-like-company/">habits</a>! As I&rsquo;ve written before, &ldquo;habits are a product of the environment in which you operate. And if you make your habits part of that environment, they’re more likely to stick.&rdquo;</p><p>With this in mind, I created a new command-line interface application called <a href="https://github.com/vdavez/habits">habits.sh</a>. It&rsquo;s extremely simple; you configure a few things you want to track every day, run an initialization script, and then, every morning, you can get prompts to track those habits. Over the course of the day, I update my tracker with a single command: &ldquo;habits&rdquo;!</p><p>I know there are probably hundreds (thousands?) of similar applications out there. But thinking a bit deeper about how I <em>personally</em> wanted to track my habits allowed me to build more of the environment I wanted to see in the world.</p><p>Do you track your habits? What do you track? Do you use something like this? What would make this better? Let me know what you think!</p>
]]></content:encoded><category>projects</category><enclosure url="https://github.com/vdavez/habits/raw/main/docs/demo.gif" type="image/gif"/></item><item><title>Output JSON (and other formats) from SQLite</title><link>https://vdavez.com/2022/12/output-json-and-other-formats-from-sqlite/</link><pubDate>Sat, 17 Dec 2022 00:00:00 -0600</pubDate><guid>https://vdavez.com/2022/12/output-json-and-other-formats-from-sqlite/</guid><description>&lt;figure>
&lt;img loading="lazy" src="https://vdavez.com/images/more_you_know.png"/>
&lt;/figure>
&lt;p>Here&amp;rsquo;s a fun experiment. Suppose you have a CSV and want to export it into JSON. Obviously, one way you could do it is through a purpose-built tool. But did you know you can use SQLite directly? Let&amp;rsquo;s try it.&lt;/p>
&lt;p>Suppose you have the following CSV saved (cleverly) as &amp;ldquo;in.csv&amp;rdquo;:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-csv" data-lang="csv">&lt;span class="line">&lt;span class="cl">&lt;span class="s">name&lt;/span>&lt;span class="p">,&lt;/span>&lt;span class="s">age&lt;/span>&lt;span class="p">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="p">&lt;/span>&lt;span class="s">Jane Doe&lt;/span>&lt;span class="p">,&lt;/span>&lt;span class="s">25&lt;/span>&lt;span class="p">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="p">&lt;/span>&lt;span class="s">William Shakespeare&lt;/span>&lt;span class="p">,&lt;/span>&lt;span class="s"> 40&lt;/span>&lt;span class="p">
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>You can run the following script at the command line:&lt;/p>
&lt;div class="highlight">&lt;div class="chroma">
&lt;table class="lntable">&lt;tr>&lt;td class="lntd">
&lt;pre tabindex="0" class="chroma">&lt;code>&lt;span class="lnt">1
&lt;/span>&lt;span class="lnt">2
&lt;/span>&lt;span class="lnt">3
&lt;/span>&lt;span class="lnt">4
&lt;/span>&lt;span class="lnt">5
&lt;/span>&lt;span class="lnt">6
&lt;/span>&lt;/code>&lt;/pre>&lt;/td>
&lt;td class="lntd">
&lt;pre tabindex="0" class="chroma">&lt;code class="language-sh" data-lang="sh">&lt;span class="line">&lt;span class="cl">sqlite3 /tmp/test.db &lt;span class="se">\
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="se">&lt;/span> &lt;span class="s1">&amp;#39;.mode csv&amp;#39;&lt;/span> &lt;span class="se">\
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="se">&lt;/span> &lt;span class="s1">&amp;#39;.import in.csv temp&amp;#39;&lt;/span> &lt;span class="se">\
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="se">&lt;/span> &lt;span class="s1">&amp;#39;.mode json&amp;#39;&lt;/span> &lt;span class="se">\
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="se">&lt;/span> &lt;span class="s1">&amp;#39;.once out.json&amp;#39;&lt;/span> &lt;span class="se">\
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="se">&lt;/span> &lt;span class="s1">&amp;#39;SELECT * from temp;&amp;#39;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/td>&lt;/tr>&lt;/table>
&lt;/div>
&lt;/div>&lt;p>And out you&amp;rsquo;ll get this saved in &amp;ldquo;out.json&amp;rdquo;!&lt;/p></description><content:encoded>&lt;![CDATA[<figure><img loading="lazy" src="/images/more_you_know.png"/></figure><p>Here&rsquo;s a fun experiment. Suppose you have a CSV and want to export it into JSON. Obviously, one way you could do it is through a purpose-built tool. But did you know you can use SQLite directly? Let&rsquo;s try it.</p><p>Suppose you have the following CSV saved (cleverly) as &ldquo;in.csv&rdquo;:</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-csv" data-lang="csv"><span class="line"><span class="cl"><span class="s">name</span><span class="p">,</span><span class="s">age</span><span class="p"/></span></span><span class="line"><span class="cl"><span class="p"/><span class="s">Jane Doe</span><span class="p">,</span><span class="s">25</span><span class="p"/></span></span><span class="line"><span class="cl"><span class="p"/><span class="s">William Shakespeare</span><span class="p">,</span><span class="s"> 40</span><span class="p"/></span></span></code></pre></div><p>You can run the following script at the command line:</p><div class="highlight"><div class="chroma"><table class="lntable"><tr><td class="lntd"><pre tabindex="0" class="chroma"><code><span class="lnt">1</span><span class="lnt">2</span><span class="lnt">3</span><span class="lnt">4</span><span class="lnt">5</span><span class="lnt">6</span></code></pre></td><td class="lntd"><pre tabindex="0" class="chroma"><code class="language-sh" data-lang="sh"><span class="line"><span class="cl">sqlite3 /tmp/test.db<span class="se">\</span></span></span><span class="line"><span class="cl"><span class="se"/><span class="s1">'.mode csv'</span><span class="se">\</span></span></span><span class="line"><span class="cl"><span class="se"/><span class="s1">'.import in.csv temp'</span><span class="se">\</span></span></span><span class="line"><span class="cl"><span class="se"/><span 
class="s1">'.mode json'</span><span class="se">\</span></span></span><span class="line"><span class="cl"><span class="se"/><span class="s1">'.once out.json'</span><span class="se">\</span></span></span><span class="line"><span class="cl"><span class="se"/><span class="s1">'SELECT * from temp;'</span></span></span></code></pre></td></tr></table></div></div><p>And out you&rsquo;ll get this saved in &ldquo;out.json&rdquo;!</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">[</span></span></span><span class="line"><span class="cl"><span class="p">{</span><span class="nt">"name"</span><span class="p">:</span><span class="s2">"Jane Doe"</span><span class="p">,</span><span class="nt">"age"</span><span class="p">:</span><span class="s2">"25"</span><span class="p">},</span></span></span><span class="line"><span class="cl"><span class="p">{</span><span class="nt">"name"</span><span class="p">:</span><span class="s2">"William Shakespeare"</span><span class="p">,</span><span class="nt">"age"</span><span class="p">:</span><span class="s2">" 40"</span><span class="p">}</span></span></span><span class="line"><span class="cl"><span class="p">]</span></span></span></code></pre></div><p>What&rsquo;s happening here? Well, a couple of really cool things! You&rsquo;re exploiting the<code>.import</code> function of sqlite3 that allows importing csv files into sqlite. 
But even cooler, you&rsquo;re exploiting the <a href="https://sqlite.org/cli.html#changing_output_formats">output formats</a> of SQLite by exporting to JSON.</p><p>You can even omit line 5 and then go straight into <code>jq</code> or whatever format you&rsquo;re messing with!</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sh" data-lang="sh"><span class="line"><span class="cl">sqlite3 /tmp/test.db <span class="se">\
</span></span></span><span class="line"><span class="cl"><span class="se"></span> <span class="s1">'.mode json'</span> <span class="se">\
</span></span></span><span class="line"><span class="cl"><span class="se"></span> <span class="s1">'SELECT * from temp;'</span> <span class="p">|</span> jq .
</span></span></code></pre></div><p>As a bonus, you can export into markdown, a table, or even HTML by changing the mode. Neat, huh?</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sh" data-lang="sh"><span class="line"><span class="cl">sqlite3 /tmp/test.db <span class="se">\
</span></span></span><span class="line"><span class="cl"><span class="se"></span> <span class="s1">'.mode markdown'</span> <span class="se">\
</span></span></span><span class="line"><span class="cl"><span class="se"></span> <span class="s1">'SELECT * from temp;'</span>
</span></span></code></pre></div><p>Hat tip to <a href="https://stackoverflow.com/a/67186486">this answer on Stack Overflow</a> for teaching me the basic trick!</p><p>TIL!</p>
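<p>Bonus sketch (mine, not part of the original trick): the same CSV-to-JSON round trip can be scripted with Python&rsquo;s stdlib <code>sqlite3</code> module. Table and column names mirror the example above, and &ndash; just like <code>.import</code> into a fresh table &ndash; everything is stored as text:</p>

```python
# Recreate the CLI trick with Python's stdlib sqlite3 module: load the
# post's CSV into a table, then dump the rows as JSON. The table name
# "temp" and the TEXT columns mirror the CLI example above.
import csv
import io
import json
import sqlite3

csv_text = "name,age\nJane Doe,25\nWilliam Shakespeare, 40\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp (name TEXT, age TEXT)")
conn.executemany(
    "INSERT INTO temp VALUES (:name, :age)",
    csv.DictReader(io.StringIO(csv_text)),  # yields one dict per CSV row
)

conn.row_factory = sqlite3.Row  # rows become dict-like, keyed by column name
out = json.dumps([dict(row) for row in conn.execute("SELECT * FROM temp")])
print(out)  # age values stay strings, and " 40" keeps its leading space
```

<p>Note that the leading space in &ldquo; 40&rdquo; survives here too, which is the same quirk the CLI import shows in out.json above.</p>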
]]></content:encoded><category>til</category><enclosure url="https://vdavez.com/images/more_you_know.png" type="image/png"/></item><item><title>Democracy's Data</title><link>https://vdavez.com/2022/12/democracys-data/</link><pubDate>Thu, 08 Dec 2022 00:00:00 -0600</pubDate><guid>https://vdavez.com/2022/12/democracys-data/</guid><description>&lt;figure class="grid justify-center object-cover">&lt;a href="#ZgotmplZ">
&lt;img loading="lazy" src="#ZgotmplZ"
alt="The cover of Dan Bouk&amp;#39;s book: Democracy&amp;#39;s Data" width="300" height="460"/> &lt;/a>
&lt;/figure>
&lt;blockquote>
&lt;p>There are stories in the data. You just have to know how to read them.&lt;/p>&lt;/blockquote>
&lt;p>So opens Dan Bouk&amp;rsquo;s &amp;ldquo;Democracy&amp;rsquo;s Data: The Hidden Stories in the U.S. Census and How to Read Them.&amp;rdquo; The book has &lt;a href="https://www.shroudedincloaksofboringness.com/democracysdata/">received incredibly positive reviews&lt;/a> and for good reason: it&amp;rsquo;s beautifully written and insightful scholarship.&lt;/p>
&lt;p>Bouk claims to specialize in studying &lt;a href="https://www.shroudedincloaksofboringness.com/">&amp;ldquo;modern things shrouded in cloaks of boringness&amp;rdquo;&lt;/a>. And he&amp;rsquo;s clearly very good at it.&lt;/p>
&lt;p>And yet, convincing you to read Bouk&amp;rsquo;s book is not the point of this post. Instead, I wanted to share that Bouk&amp;rsquo;s book caused me to reconsider a concept that is pretty familiar in the world of data: &amp;ldquo;data quality&amp;rdquo;.&lt;/p></description><content:encoded>&lt;![CDATA[<figure class="grid justify-center object-cover"><a href="#ZgotmplZ"><img loading="lazy" src="#ZgotmplZ" alt="The cover of Dan Bouk's book: Democracy's Data" width="300" height="460"/></a></figure><blockquote><p>There are stories in the data. You just have to know how to read them.</p></blockquote><p>So opens Dan Bouk&rsquo;s &ldquo;Democracy&rsquo;s Data: The Hidden Stories in the U.S. Census and How to Read Them.&rdquo; The book has <a href="https://www.shroudedincloaksofboringness.com/democracysdata/">received incredibly positive reviews</a> and for good reason: it&rsquo;s beautifully written and insightful scholarship.</p><p>Bouk claims to specialize in studying <a href="https://www.shroudedincloaksofboringness.com/">&ldquo;modern things shrouded in cloaks of boringness&rdquo;</a>. And he&rsquo;s clearly very good at it.</p><p>And yet, convincing you to read Bouk&rsquo;s book is not the point of this post. Instead, I wanted to share that Bouk&rsquo;s book caused me to reconsider a concept that is pretty familiar in the world of data: &ldquo;data quality&rdquo;.</p><p>Data quality is the sort of thing that people who deal with data think about a lot and talk a lot about. We&rsquo;ll observe that, when making decisions based on data, we need to make sure that we have &ldquo;good data&rdquo;. 
Bad data, on the other hand, causes people to knowingly observe &ldquo;garbage in, garbage out&rdquo;.</p><p>In the preface of his book, Bouk makes a statement that (at first blush) runs in the same direction about data quality: &ldquo;what is our democracy, if this is its data?&rdquo; As I started on the book, I imagined: &ldquo;yes, surely the book will reveal how an overconfident census leads to all kinds of political problems. Garbage in, garbage out, right?&rdquo; And, sure, I guess there&rsquo;s some of that in there.</p><p>But, reading <em>Democracy&rsquo;s Data</em> offers an alternative framing: behind any data are stories. When you read data, if you read it deeply and with dignity, you can learn greater truths about the motivations and goals across the data lifecycle.</p><p>Bouk&rsquo;s effort largely returns to the &ldquo;doorstep interactions&rdquo; of the decennial census and he describes all of the challenges of capturing, classifying, and reporting the number of people in our country and facts about them. Issues of race, gender, politics, and more suffuse the &ldquo;simple&rdquo; counting of people on a given day. But, by reading in the data (and what was left out of the data), we better understand the complexities and richness of our nation. We see how people imagined and contributed to a better world, lived through fear and struggle, created beauty and art. We see how the &ldquo;mistakes&rdquo; of the counts &ndash; and the omissions from the counts &ndash; revealed and reflected people more accurately.</p><p>This book taught me, therefore, to remove the phrase &ldquo;garbage in, garbage out&rdquo; from my vocabulary when talking about data. Instead, when confronted with the inevitable &ldquo;data quality&rdquo; issues that affect any organization, I will try and approach the discussion with greater curiosity about the question: &ldquo;what is this organization, if this is its data?&rdquo;</p><p>Data, as it happens, is never really garbage. 
Rather, looking for the stories behind the data allows questions of data quality to be an entry point to study the motivations and actions of the collectors, the processors, the managers, the publishers. It may be messy (just like democracy), but only by better understanding it can we truly make progress.</p>]]></content:encoded><category>book review</category><category>data</category><category>government</category><enclosure url="https://mpd-biblio-covers.imgix.net/9780374602543.jpg?w=900" type="image/jpeg"/></item><item><title>Even and Odd Numbered Addresses</title><link>https://vdavez.com/2022/12/even-and-odd-numbered-addresses/</link><pubDate>Mon, 05 Dec 2022 10:47:23 -0600</pubDate><guid>https://vdavez.com/2022/12/even-and-odd-numbered-addresses/</guid><description>&lt;figure>
&lt;img loading="lazy" src="https://vdavez.com/images/more_you_know.png"/>
&lt;/figure>
&lt;p>Chances are, if you live in the United States, the buildings on one side of the street are all even or all odd, and the buildings on the opposite side of the street are the opposite.&lt;/p>
&lt;p>But &lt;em>why&lt;/em> is this the case? How did this come to pass?&lt;/p>
&lt;p>As it turns out, you can trace this convention back to &lt;a href="https://www.law.cornell.edu/constitution-conan/article-1/section-2/clause-3/enumeration-clause">Article I, Section 2, Clause 3 (the &amp;ldquo;Enumeration Clause&amp;rdquo;) of the US Constitution&lt;/a>, which established the United States Census, and to Philadelphia, which was the nation&amp;rsquo;s capital during the first national census!&lt;/p>
&lt;p>&lt;strong>That&amp;rsquo;s right! The odd/even house numbering convention owes its origin to the founding of the US constitution!&lt;/strong>&lt;/p></description><content:encoded>&lt;![CDATA[<figure><img loading="lazy" src="/images/more_you_know.png"/></figure><p>Chances are, if you live in the United States, the buildings on one side of the street are all even or all odd, and the buildings on the opposite side of the street are the opposite.</p><p>But<em>why</em> is this the case? How did this come to pass?</p><p>As it turns out, you can trace this convention back to<a href="https://www.law.cornell.edu/constitution-conan/article-1/section-2/clause-3/enumeration-clause">Article I, Section 2, Clause 3 (the &ldquo;Enumeration Clause&rdquo;) of the US Constitution</a>, which established the United States Census, and to Philadelphia, which was the nation&rsquo;s capital during the first national census!</p><p><strong>That&rsquo;s right! The odd/even house numbering convention owes its origin to the founding of the US constitution!</strong></p><p>While reading Dan Bouk&rsquo;s<a href="https://bookshop.org/p/books/democracy-s-data-the-hidden-stories-in-the-u-s-census-and-how-to-read-them-dan-bouk/18721705">Democracy&rsquo;s Data: The Hidden Stories in the U.S. Census and How to Read Them</a>, I came across this assertion and checked out his citation and then I went one step further to learn more. Here&rsquo;s the source of that assertion,<a href="https://www.academia.edu/281406/Governmentality_the_Grid_and_the_Beginnings_of_a_Critical_Spatial_History_of_the_Geo_Coded_World">a doctoral thesis entitled &ldquo;Governmentality, the Grid, and the Beginnings of a Critical Spatial History of the Geo-Coded World&rdquo;</a>:</p><blockquote><p>During the 1790s, the practice of placing odd and even numbers along
opposite sides of the street became common. <strong>The U.S. Marshal who conducted the
first federal census in Philadelphia (1790), Clement Biddle, renumbered the city&rsquo;s houses and separated odd and even numbers on different sides of the street.</strong> Interestingly, Biddle (1791) then published his own city directory based upon this new numbering scheme.</p></blockquote><p>(emphasis added). Shortly after Philadelphia did it, other cities (like New York City) followed suit.</p><p>The full thesis on &ldquo;house numbering&rdquo; is actually a super interesting read. He argues that &ldquo;as a spatial practice, house numbering is a comparatively recent innovation, which did not become widespread until the second half of the eighteenth century, and in some cities the practice was not systematically adopted until well into the nineteenth century.&rdquo;</p><p>So if numbering didn&rsquo;t start until the eighteenth century, why did house numbering start in the first place?</p><blockquote><p>The emergence of house numbering as a spatial practice cannot solely been attributed to a single factor. Rather, there were multiple forces at work. On the one hand, centralized military control is a key element in explaining the rise of house numbering in France (with the exception of Paris). Equally important was the use of house numbers for civil administration in urban centers. 
The practice of administratively affixing individuals to specific numerical addresses enabled the government to more easily tax its population, facilitated the conducting of a national census, and also served as the administrative basis for the commodification of space.</p></blockquote><p>TIL!</p>]]></content:encoded><category>til</category><enclosure url="https://vdavez.com/images/more_you_know.png" type="image/jpeg"/></item><item><title>The Second Founding</title><link>https://vdavez.com/2022/12/the-second-founding/</link><pubDate>Sun, 04 Dec 2022 06:12:09 -0600</pubDate><guid>https://vdavez.com/2022/12/the-second-founding/</guid><description>&lt;figure class="grid justify-center">&lt;a href="#ZgotmplZ">
&lt;img loading="lazy" src="#ZgotmplZ"/> &lt;/a>
&lt;/figure>
&lt;p>During the oral arguments in &lt;a href="">&lt;em>Students for Fair Admissions, Inc. v. Presidents and Fellows of Harvard College&lt;/em>&lt;/a>, the case where the Supreme Court is highly likely to end affirmative action in education as we know it, Chief Justice John Roberts stated, &amp;ldquo;We did not fight a Civil War about oboe players. We did fight a Civil War to eliminate racial discrimination, and that&amp;rsquo;s why it&amp;rsquo;s a matter of &amp;ndash; of considerable concern.&amp;rdquo; Implied in this question is a historical claim that the Fourteenth Amendment to the United States constitution was adopted to create a &amp;ldquo;color-blind constitution.&amp;rdquo;&lt;/p>
&lt;p>That argument for a color-blind constitution was made more explicit by the attorney for those challenging Harvard&amp;rsquo;s policies:&lt;/p>
&lt;blockquote>
&lt;p>In terms of the original meaning of the Fourteenth Amendment, the best source on this I&amp;rsquo;ve ever read is the United States&amp;rsquo; brief on reargument in Brown. It painstakingly details the legislative history and how the framers of the Fourteenth Amendment saw it as a ban on all racial classifications.
Also, the &amp;ndash; everyone knows that the impetus for the Fourteenth Amendment was to constitutionalize the Civil Rights Act of 1866. The Civil Rights Act of 1866 is a series of bans on racial discrimination. It&amp;rsquo;s a series of color-blind measures and requirements.&lt;/p>&lt;/blockquote>
&lt;p>But are these points correct? Did we fight a Civil War to eliminate racial discrimination? Was the purpose of the Fourteenth Amendment to establish a &amp;ldquo;color-blind constitution&amp;rdquo;? Is it true that &amp;ldquo;everyone knows&amp;rdquo; that the Fourteenth Amendment was trying to constitutionalize the Civil Rights Act of 1866?&lt;/p>
&lt;p>History, as it turns out, is more complicated than the advocates suggest.&lt;/p></description><content:encoded>&lt;![CDATA[<figure class="grid justify-center"><a href="#ZgotmplZ"><img loading="lazy" src="#ZgotmplZ"/></a></figure><p>During the oral arguments in<a href=""><em>Students for Fair Admissions, Inc. v. Presidents and Fellows of Harvard College</em></a>, the case where the Supreme Court is highly likely to end affirmative action in education as we know it, Chief Justice John Roberts stated &ldquo;We did not fight a Civil War about oboe players. We did fight a Civil War to eliminate racial discrimination, and that&rsquo;s why it&rsquo;s a matter of &ndash; of considerable concern.&rdquo; Implied in this question is a historical claim the Fourteenth Amendment to the United States constitution was adopted to create a &ldquo;color-blind constitution.&rdquo;</p><p>That argument of a color-blind constitution was more explicit by the attorney for those challenging Harvard&rsquo;s policies:</p><blockquote><p>In terms of the original meaning of the Fourteenth Amendment, the best source on this I&rsquo;ve ever read is the United States&rsquo; brief on reargument in Brown. It painstakingly details the legislative history and how the framers of the Fourteenth Amendment saw it as a ban on all racial classifications.
Also, the &ndash; everyone knows that the impetus for the Fourteenth Amendment was to constitutionalize the Civil Rights Act of 1866. The Civil Rights Act of 1866 is a series of bans on racial discrimination. It&rsquo;s a series of color-blind measures and requirements.</p></blockquote><p>But are these points correct? Did we fight a Civil War to eliminate racial discrimination? Was the purpose of the Fourteenth Amendment to establish a &ldquo;color-blind constitution&rdquo;? Is it true that &ldquo;everyone knows&rdquo; that the Fourteenth Amendment was trying to constitutionalize the Civil Rights Act of 1866?</p><p>History, as it turns out, is more complicated than the advocates suggest.</p><hr><p>Indeed, when reading<em>The Second Founding</em>, the latest book by Eric Foner, who is one of the leading historians of the Reconstruction Era, you quickly discover that the Thirteenth, Fourteenth, and Fifteenth amendments represent a deeply complex soup of goals. As he writes:</p><blockquote><p>For the historian, seeking to understand the purposes of the Reconstruction amendments is not the same as attempting to identify, as a matter of jurisprudence, the &ldquo;original intent&rdquo; of those who drafted and voted on them or the original meaning of the language used. Whether the courts should base decisions on &ldquo;originalism&rdquo; is a political, not a historical, question. But no historian believes that any important document possesses a single intent or meaning. Numerous motives inspired the constitutional amendments, including genuine idealism, the desire to secure permanently the North&rsquo;s victory in the Civil War, and partisan advantage.</p></blockquote><p>Foner&rsquo;s work reveals how the framers of the Reconstruction amendments contemplated radical departures from the constitutional framework that preceded it, and how those expectations were shattered over time. 
Again, as Foner writes:</p><blockquote><p>In the chapters that follow my purpose is not so much to identify the one &ldquo;true&rdquo; intent of the Reconstruction amendments, as to identify the range of ideas that contributed to the second founding; to explore the rapid evolution of thinking in which previously distinct categories of natural, civil, political, and social rights merged into a more diffuse, more modern idea of citizens&rsquo; rights that included most or all of them; and to suggest that more robust interpretations of the amendments are possible, as plausible, if not more so, in terms of the historical record, than how the Supreme Court has in fact construed them.</p></blockquote><p>The book covers a lot of ground in a relatively quick read. And, for me anyway, Foner accomplished his mission of helping illuminate the wide diversity of perspectives and goals in the framers of the Reconstruction amendments.</p><hr><p>For example, although modern readers of the Fourteenth Amendment predominantly focus on section one (or in these unusual times, section three), few focus on section two. That&rsquo;s not especially surprising given that the section has never been enforced.</p><p>At the time, however, what is now section two was the primary focus of the committee considering constitutional amendments. As Foner writes:</p><blockquote><p>The first version of a Fourteenth Amendment to emerge from the committee was an attempt to finesse the black suffrage issue while dealing with an ironic political consequence of the abolition of slavery. Now that all blacks were free, the Constitution&rsquo;s Three-Fifths Clause became inoperative. In the next reapportionment allocating membership in the House of Representatives and votes in the Electoral College, all blacks would be counted as part of each state&rsquo;s population. 
The southern states would thus enjoy added representation, giving them, as one congressman put it, &ldquo;an undue and unjust amount of political power in the government.&rdquo;</p></blockquote><p>An ironic political consequence, indeed, that the Republican project of abolishing slavery would favor southern Democrats in a future Congress and White House. But actually<em>solving</em> for that irony created other problems for Republican drafters:</p><blockquote><p>Seventeen proposals to restructure congressional representation came before the Joint Committee. The simplest way of dealing with this problem, Radicals insisted, was to require the states to enfranchise black men. This would ensure that the Slave Power would no longer control southern politics. Moderates, however, believed such an amendment would never secure ratification. Another option was to base representation on voters, not total population, as [Rep. Thaddeus] Stevens had proposed. This would leave suffrage requirements in the hands of the states. It would encourage the Johnson governments to enfranchise their black populations or leave those states with reduced power in Washington (a loss of one-third of their congressmen according to one estimate). But as Representative James G. Blaine of Maine pointed out early in January 1866, western migration was skewed toward men, and thus basing representation on voters would result in a shift of power away from eastern states, which had a higher percentage of women in their populations. The proposal, Blaine warned, might also unleash an &ldquo;unseemly scramble for voters,&rdquo; including the enfranchisement of women, which would double a state&rsquo;s representation in Congress.</p></blockquote><p>In other words (and unsurprising to the modern observer), members of Congress were keenly focused on future election cycles and the Fourteenth Amendment was (in part) an effort to maintain a structural advantage for the Republicans. 
And the compromise they settled on reflected the prejudices of the time, the idealism of reformers, and the realpolitik calculus of what could get enacted and ratified.</p><hr><p>In the end, as Foner notes, it is a political question about whether, legally, it<em>matters</em> what the authors of the Reconstruction Era thought they were doing when they engaged in a &ldquo;constitutional revolution&rdquo; by passing the Thirteenth, Fourteenth, and Fifteenth amendments.</p><p>From a purely historical perspective, though, his book reveals just how messy progress can look. In the moment, and even 150 years later.</p>]]></content:encoded><category>book review</category><category>law</category><enclosure url="https://upload.wikimedia.org/wikipedia/en/6/65/Second_founding_image.jpg" type="image/jpeg"/></item><item><title>Innovation in the Judiciary</title><link>https://vdavez.com/2022/11/innovation-in-the-judiciary/</link><pubDate>Wed, 30 Nov 2022 22:00:00 -0600</pubDate><guid>https://vdavez.com/2022/11/innovation-in-the-judiciary/</guid><description>&lt;p>Years ago, I &lt;a href="https://esq.io/2016/08/should-lawyers-learn-to-code/">argued&lt;/a> that lawyers should learn about writing software because doing so could improve the interactions between lawyers and technologists. As I wrote:&lt;/p>
&lt;blockquote>
&lt;p>Ultimately, lawyers and non-lawyers must learn to talk to each other. To collaborate with each other. To tackle society&amp;rsquo;s challenges together. And that requires empathy for each other&amp;rsquo;s domain expertise.&lt;/p>&lt;/blockquote>
&lt;p>Over the past six years, it is largely unsurprising that lawyers have not stepped up to the plate. More surprising, though, is that government institutions have increasingly created formal paths for technologists to answer the call.&lt;/p>
&lt;p>In my own experience, I have seen tech fellowship programs succeed in both &lt;a href="https://esq.io/2016/06/the-code-of-the-district-of-columbia-is-now-available-online/">local government&lt;/a>, and in both branches of &lt;a href="https://presidentialinnovationfellows.gov/">the&lt;/a> &lt;a href="https://digitalcorps.gsa.gov">federal&lt;/a> &lt;a href="https://www.techcongress.io/">government&lt;/a>. Which is why I was &lt;em>thrilled&lt;/em> to see the creation of the &lt;a href="https://www.law.georgetown.edu/tech-institute/programs/judicial-innovation/">Judicial Innovation Fellowship program&lt;/a> at Georgetown Law&amp;rsquo;s Tech Institute.&lt;/p></description><content:encoded>&lt;![CDATA[<p>Years ago, I<a href="https://esq.io/2016/08/should-lawyers-learn-to-code/">argued</a> that lawyers should learn about writing software because doing so could improve the interactions between lawyers and technologists. As I wrote:</p><blockquote><p>Ultimately, lawyers and non-lawyers must learn to talk to each other. To collaborate with each other. To tackle society&rsquo;s challenges together. And that requires empathy for each other&rsquo;s domain expertise.</p></blockquote><p>Over the past six years, it is largely unsurprising that lawyers have not stepped up to the plate. More surprising, though, is that government institutions have increasingly created formal paths for technologists to answer the call.</p><p>In my own experience, I have seen tech fellowship programs succeed in both<a href="https://esq.io/2016/06/the-code-of-the-district-of-columbia-is-now-available-online/">local government</a>, and in both branches of<a href="https://presidentialinnovationfellows.gov/">the</a><a href="https://digitalcorps.gsa.gov">federal</a><a href="https://www.techcongress.io/">government</a>. 
Which is why I was<em>thrilled</em> to see the creation of the<a href="https://www.law.georgetown.edu/tech-institute/programs/judicial-innovation/">Judicial Innovation Fellowship program</a> at Georgetown Law&rsquo;s Tech Institute.</p><p>Readers of these pages know that trial courts at the state, local, territorial, and tribal (SLTT) levels are vital to the basic fabric of democracy in the United States. And, unfortunately, SLTT trial courts are typically structurally underfunded and rely on outmoded tools and technologies.</p><p>As the program explains:</p><blockquote><p>Partnering with state and tribal courts to build critical data infrastructure, simplify process, and improve usability of court services, this competitive fellowship is a unique opportunity to innovate a core democratic institution. More than just a job with a competitive salary and benefits, this fellowship is the flagship opportunity to change the way people access their rights and are served by courts.</p></blockquote><p>On first blush, these goals may sound lofty. It&rsquo;s just a one-year fellowship, right?</p><p>Over the years, though, a pattern with government tech fellowship programs has emerged. And I, for one, am hopeful to see it repeat itself here. That pattern? People join these fellowships expecting to stay for a year or two and then return to the private sector. But that expectation is quickly shattered when they realize the potential impact they can have and the level of professional satisfaction they can enjoy as public-servant technologists.</p><p>For example, over 50% of the Presidential Innovation Fellows end up staying in government roles after graduating from the program. By my read of the description, the PIF appears to be the closest analogue to the Judicial Innovation Fellowship program. 
If even a quarter of the technologists who join the courts as innovation fellows end up staying, our democracy will be better off.</p><p>I still believe that lawyers should put in effort to learn how technology works. In the meantime, though, we are fortunate that government and related institutions are stepping up to create space for technologists to fill an access-to-justice gap.</p><p>If you work in a SLTT trial court, or are a technologist who cares about justice and democracy, I highly recommend you<a href="https://www.law.georgetown.edu/tech-institute/programs/judicial-innovation/">check out the program</a>.</p>]]></content:encoded><category>fellowships</category><category>innovation</category><enclosure url="https://www.law.georgetown.edu/tech-institute/wp-content/uploads/sites/42/2022/11/Judicial-Innovation-Fellowship-Logo-740x740.png" type="image/jpeg"/></item><item><title>Better RSS Feed Content</title><link>https://vdavez.com/2022/11/better-rss-feed-content/</link><pubDate>Wed, 30 Nov 2022 10:27:25 -0600</pubDate><guid>https://vdavez.com/2022/11/better-rss-feed-content/</guid><description>&lt;div class='grid justify-center '>&lt;iframe src='https://giphy.com/embed/3og0IMJcSI8p6hYQXS' width="480" height="355" frameBorder="0" class="giphy-embed" allowFullScreen>&lt;/iframe>&lt;p>&lt;a href='https://media.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif'>via GIPHY&lt;/a>&lt;/p>&lt;/div>
&lt;p>I rely heavily on RSS feeds and I published my own content here using an &lt;a href="https://vdavez.com/posts/index.xml">RSS feed&lt;/a>. But, I noticed that my RSS reader wasn&amp;rsquo;t formatting my posts very well.&lt;/p>
&lt;p>TIL how to fix that using Hugo&amp;rsquo;s &lt;a href="https://gohugo.io/content-management/summaries/">content summaries&lt;/a>.&lt;/p>
&lt;p>Bottom line: mark up your content with a &amp;ldquo;&amp;lt;!--more--&amp;gt;&amp;rdquo; tag.&lt;/p>
&lt;h2>&lt;/h2></description><content:encoded>&lt;![CDATA[<div class='grid justify-center '><iframe src='https://giphy.com/embed/3og0IMJcSI8p6hYQXS' width="480" height="355" frameBorder="0" class="giphy-embed" allowFullScreen=/><p><a href='https://media.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif'>via GIPHY</a></p></div><p>I rely heavily on RSS feeds and I published my own content here using an<a href="https://vdavez.com/posts/index.xml">RSS feed</a>. But, I noticed that my RSS reader wasn&rsquo;t formatting my posts very well.</p><p>TIL how to fix that using Hugo&rsquo;s<a href="https://gohugo.io/content-management/summaries/">content summaries</a>.</p><p>Bottom line: mark up your content with a &ldquo;&lt;!--more-->&rdquo; tag.</p><h2/><p>To illustrate the problem a bit, I have copied the RSS&rsquo;s feed XML from the paragraphs above to demonstrate what it looked like before and after I made the adjustment.</p><p>Before&hellip;</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;description></span></span></span><span class="line"><span class="cl">I rely heavily on RSS feeds and I published my own content here using an RSS feed. But, I noticed that my RSS reader wasn<span class="ni">&amp;rsquo;</span>t formatting my posts very well. TIL how to fix that using Hugo<span class="ni">&amp;rsquo;</span>s content summaries. Bottom line: mark up your content with a<span class="ni">&amp;ldquo;&amp;lt;</span>!--more--<span class="ni">&amp;gt;&amp;rdquo;</span> tag. 
via GIPHY<span class="nt">&lt;/description></span></span></span></code></pre></div><p>and after&hellip;</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;description></span></span></span><span class="line"><span class="cl"><span class="nt">&lt;p></span>I rely heavily on RSS feeds and I published my own content here using an<span class="nt">&lt;a</span><span class="na">href=</span><span class="s">"https://vdavez.com/posts/index.xml"</span><span class="nt">></span>RSS feed<span class="nt">&lt;/a></span>. But, I noticed that my RSS reader wasn<span class="ni">&amp;rsquo;</span>t formatting my posts very well.<span class="nt">&lt;/p></span><span class="nt">&lt;p></span>TIL how to fix that using Hugo<span class="ni">&amp;rsquo;</span>s<span class="nt">&lt;a</span><span class="na">href=</span><span class="s">"https://gohugo.io/content-management/summaries/"</span><span class="nt">></span>content summaries<span class="nt">&lt;/a></span>.<span class="nt">&lt;/p></span><span class="nt">&lt;p></span>Bottom line: mark up your content with a<span class="ni">&amp;ldquo;&amp;lt;</span>!--more--<span class="ni">&amp;gt;&amp;rdquo;</span> tag.<span class="nt">&lt;/p></span><span class="nt">&lt;div</span><span class="na">class=</span><span class="s">'grid justify-center '</span><span class="nt">>&lt;iframe</span><span class="na">src=</span><span class="s">'https://giphy.com/embed/3og0IMJcSI8p6hYQXS'</span><span class="na">width=</span><span class="s">"480"</span><span class="na">height=</span><span class="s">"355"</span><span class="na">frameBorder=</span><span class="s">"0"</span><span class="na">class=</span><span class="s">"giphy-embed"</span><span class="err">allowFullScreen</span><span class="nt">>&lt;/iframe>&lt;p>&lt;a</span><span class="na">href=</span><span class="s">'https://media.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif'</span><span class="nt">></span>via 
GIPHY<span class="nt">&lt;/a>&lt;/p>&lt;/div>&lt;/description></span></span></span></code></pre></div><p>As you can see, in the first snippet, there are no links and no paragraph breaks. Everything is just plain text. But in the second snippet, it&rsquo;s complete html.</p><p>And all you need to do is add that &ldquo;&lt;!--more-->&rdquo; tag to get it done.</p><p>That&rsquo;s because the default<a href="https://github.com/gohugoio/hugo/blob/master/tpl/tplimpl/embedded/templates/_default/rss.xml#L35">Hugo RSS feed template</a> relies on the<code>.Summary</code> variable. And Hugo&rsquo;s default is to &ldquo;automatically take[] the first 70 words of your content as its summary and store[] it into the<code>.Summary</code> page variable for use in your templates.&rdquo;</p><p>But, the docs also say this:</p><blockquote><p>Alternatively, you may add the &ldquo;&lt;!--more-->&rdquo; summary divider where you want to split the article.
Content that comes before the summary divider will be used as that content’s summary and stored in the<code>.Summary</code> page variable with all HTML formatting intact.</p></blockquote><p>That&rsquo;s right! If you drop in that summary divider, all of the html is preserved.</p><p>Which means that, out of the box, you get prettier RSS.</p>]]></content:encoded><category>til</category><category>blogging</category><enclosure url="https://media.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif" type="image/jpeg"/></item><item><title>Hugo Categories and Tags</title><link>https://vdavez.com/2022/11/hugo-categories-and-tags/</link><pubDate>Tue, 29 Nov 2022 12:08:17 -0600</pubDate><guid>https://vdavez.com/2022/11/hugo-categories-and-tags/</guid><description>&lt;p>Most content-management systems allow for some kind of content tagging. Until recently, though, I was creating &amp;ldquo;TILs&amp;rdquo; as a separate content type. This was fine, I guess, but it was sort of clunky too in its own way.&lt;/p>
&lt;p>But, then I discovered hugo&amp;rsquo;s &lt;a href="https://gohugo.io/content-management/taxonomies/">taxonomies&lt;/a>. tl;dr — hugo ships &amp;ldquo;tags&amp;rdquo; and &amp;ldquo;categories&amp;rdquo; out of the box for posts.&lt;/p></description><content:encoded>&lt;![CDATA[<p>Most content-management systems allow for some kind of content tagging. Until recently, though, I was creating &ldquo;TILs&rdquo; as a separate content type. This was fine, I guess, but it was sort of clunky too in its own way.</p><p>But, then I discovered hugo&rsquo;s<a href="https://gohugo.io/content-management/taxonomies/">taxonomies</a>. tl;dr — hugo ships &ldquo;tags&rdquo; and &ldquo;categories&rdquo; out of the box for posts.</p><p>So, I moved all of my TILs into the<code>posts</code> folder, added<code>categories = ["til"]</code> to the front matter and &#x1f4a5; I have the ability to just have my TILs in my blog posts.</p><p>Two observations:</p><ol><li>I used categories for TILs instead of tags because then I can tag the TILs (like I have here: &ldquo;blogging&rdquo;) and can group TILs (or posts!) based on those tags.</li><li>I also had to add a<code>slug-title</code> to my front matter for the TILs because that&rsquo;s how I have posts configured. 
But that was easy enough.</li></ol><p>I&rsquo;m pretty pleased with the new, improved setup!</p>]]></content:encoded><category>til</category><category>blogging</category><enclosure url="https://media.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif" type="image/jpeg"/></item><item><title>Twitter timeline with embedded replies</title><link>https://vdavez.com/2022/11/twitter-timeline-with-embedded-replies/</link><pubDate>Thu, 17 Nov 2022 00:00:00 -0600</pubDate><guid>https://vdavez.com/2022/11/twitter-timeline-with-embedded-replies/</guid><description>&lt;p>Today, one of my colleagues reached out because the &lt;a href="https://help.twitter.com/en/using-twitter/embed-twitter-feed">twitter embed&lt;/a> on their website only showed tweets, not replies. I helped build the website, so I offered to figure out how to fix that.&lt;/p>
&lt;p>To use a twitter embed, you simply include an &amp;lt;a&amp;gt; tag with the class &amp;ldquo;twitter-timeline&amp;rdquo; and import a bit of javascript. That javascript will then replace the tag with all of the necessary tweet data from the twitter API and apply some styles. It&amp;rsquo;s pretty slick.&lt;/p></description><content:encoded>&lt;![CDATA[<p>Today, one of my colleagues reached out because the<a href="https://help.twitter.com/en/using-twitter/embed-twitter-feed">twitter embed</a> on their website only showed tweets, not replies. I helped build the website, so I offered to figure out how to fix that.</p><p>To use a twitter embed, you simply include an &lt;a> tag with the class &ldquo;twitter-timeline&rdquo; and import a bit of javascript. That javascript will then replace the tag with all of the necessary tweet data from the twitter API and apply some styles. It&rsquo;s pretty slick.</p><p>But, by default, the javascript doesn&rsquo;t allow for replies. So I dug a bit into<a href="https://developer.twitter.com/en/docs/twitter-for-websites/timelines/guides/parameter-reference">the documentation</a>.</p><p>There, I learned that you can set a &ldquo;data-* attribute&rdquo; to enable replies. 
It&rsquo;s pretty simple:</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-html" data-lang="html"><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">a</span><span class="na">class</span><span class="o">=</span><span class="s">"twitter-timeline"</span><span class="na">data-show-replies</span><span class="o">=</span><span class="s">"true"</span><span class="na">href</span><span class="o">=</span><span class="s">"https://twitter.com/${USERNAME}"</span><span class="p">></span></span></span><span class="line"><span class="cl"> Tweets by ${username}</span></span><span class="line hl"><span class="cl"><span class="p">&lt;/</span><span class="nt">a</span><span class="p">></span></span></span><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">script</span><span class="na">async</span><span class="na">charset</span><span class="o">=</span><span class="s">"utf-8"</span><span class="na">src</span><span class="o">=</span><span class="s">"https://platform.twitter.com/widgets.js"</span><span class="p">>&lt;/</span><span class="nt">script</span><span class="p">></span></span></span></code></pre></div><p>Just like that, with the addition of that highlighted line, your twitter replies show up in your timeline!</p><h2 id="bonus-do-not-track-parameter">Bonus: &ldquo;Do Not Track&rdquo; parameter</h2><p>As I perused the documentation, I noticed another parameter called<code>dnt</code>, which is short for &ldquo;Do Not Track&rdquo;:</p><blockquote><p>When set to true, the timeline and its embedded page on your site are not used for purposes that include personalized suggestions and personalized ads.</p></blockquote><p>I could spill ink here about default rules, ethics, and user privacy, but instead I&rsquo;ll simply note:<code>dnt</code> defaults to &ldquo;false,&rdquo; which means that, you are enabling twitter to track people&rsquo;s usage on your site by default. 
So, I flipped that setting the data attribute to &ldquo;true.&rdquo;</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-html" data-lang="html"><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">a</span></span></span><span class="line"><span class="cl"><span class="na">class</span><span class="o">=</span><span class="s">"twitter-timeline"</span></span></span><span class="line hl"><span class="cl"><span class="na">data-show-replies</span><span class="o">=</span><span class="s">"true"</span></span></span><span class="line hl"><span class="cl"><span class="na">data-dnt</span><span class="o">=</span><span class="s">"true"</span></span></span><span class="line"><span class="cl"><span class="na">href</span><span class="o">=</span><span class="s">"https://twitter.com/${USERNAME}"</span></span></span><span class="line"><span class="cl"><span class="p">></span></span></span><span class="line"><span class="cl"> Tweets by ${username}</span></span><span class="line"><span class="cl"><span class="p">&lt;/</span><span class="nt">a</span><span class="p">></span></span></span><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">script</span><span class="na">async</span><span class="na">charset</span><span class="o">=</span><span class="s">"utf-8"</span><span class="na">src</span><span class="o">=</span><span class="s">"https://platform.twitter.com/widgets.js"</span><span class="p">>&lt;/</span><span class="nt">script</span><span class="p">></span></span></span></code></pre></div>]]></content:encoded><category>til</category><enclosure url="https://media.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif" type="image/jpeg"/></item><item><title>Acer Access and Development Program</title><link>https://vdavez.com/2022/05/acer-access-and-development-program/</link><pubDate>Thu, 12 May 2022 00:00:00 
-0500</pubDate><guid>https://vdavez.com/2022/05/acer-access-and-development-program/</guid><description>&lt;p>While playing around with yet another random data set, I came across the Acer Access and Development Program. Here&amp;rsquo;s the description on the &lt;a href="https://www.ams.usda.gov/services/grants/acer">website&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>The Acer Access and Development Program (Acer) offers grants to support the efforts of States, tribal governments, and research institutions to promote the domestic maple syrup industry. Supported activities include: promotion of research and education related to maple syrup production; promotion of natural resource sustainability in the maple syrup industry; market promotion for maple syrup and maple-sap products; encouragement of owners and operators of privately held land containing species of trees in the genus Acer to initiate or expand maple-sugaring activities on the land; or to voluntarily make the land available, including by lease or other means, for access by the public for maple-sugaring activities.&lt;/p></description><content:encoded>&lt;![CDATA[<p>While playing around with yet another random data set, I came across the Acer Access and Development Program. Here&rsquo;s the description on the<a href="https://www.ams.usda.gov/services/grants/acer">website</a>:</p><blockquote><p>The Acer Access and Development Program (Acer) offers grants to support the efforts of States, tribal governments, and research institutions to promote the domestic maple syrup industry. 
Supported activities include: promotion of research and education related to maple syrup production; promotion of natural resource sustainability in the maple syrup industry; market promotion for maple syrup and maple-sap products; encouragement of owners and operators of privately held land containing species of trees in the genus Acer to initiate or expand maple-sugaring activities on the land; or to voluntarily make the land available, including by lease or other means, for access by the public for maple-sugaring activities.</p></blockquote><p>So, today, I really learned a few things:</p><ol><li>This program exists.</li><li>The genus for maple trees is &ldquo;Acer.&rdquo;</li><li>In Fiscal Year 2021, the Acer Program made $5,428,208.66 in <a href="https://www.ams.usda.gov/sites/default/files/media/AcerFY21DescriptionofFundedProjects.pdf">grant awards</a>.</li><li>According to <a href="https://en.wikipedia.org/wiki/Maple_syrup">Wikipedia</a>, Quebec accounts for 70% of the world&rsquo;s maple syrup production.</li><li>According to the National Agricultural Statistics Service, the <a href="https://downloads.usda.library.cornell.edu/usda-esmis/files/tm70mv177/w3764d11b/6q183r18c/crop0522.pdf">United States produced</a> 3.424 million gallons of syrup, which means there were almost 140 million gallons of sap collected! That&rsquo;s a lot of syrup!</li><li>According to <a href="https://www.grandviewresearch.com/industry-analysis/maple-syrup-market-report">one report</a>, the global maple syrup market size was valued at $1.49 billion in 2021!</li></ol>
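<p>The sap figure checks out against the common rule of thumb that it takes roughly 40 gallons of maple sap to make one gallon of syrup (an assumed ratio; the actual number varies with the sap&rsquo;s sugar content):</p>

```shell
# Back-of-the-envelope check of the "almost 140 million gallons of sap" figure,
# assuming the rule-of-thumb ratio of ~40 gallons of sap per gallon of syrup.
syrup_gallons=3424000   # 2021 US production, per the NASS crop report above
sap_per_syrup=40        # assumption; the real ratio varies with sap sugar content
echo "$(( syrup_gallons * sap_per_syrup )) gallons of sap"   # 136960000 gallons of sap
```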
]]></content:encoded><category>til</category><enclosure url="https://media.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif" type="image/jpeg"/></item><item><title>Trimming local git branches</title><link>https://vdavez.com/2021/07/trimming-local-git-branches/</link><pubDate>Sat, 03 Jul 2021 00:00:00 -0500</pubDate><guid>https://vdavez.com/2021/07/trimming-local-git-branches/</guid><description>&lt;p>Every now and then I remember to clean up local, &lt;em>merged&lt;/em> git branches, and I always forget how to do it. So, TIL how to think about it, thanks to &lt;a href="https://www.hacksparrow.com/git/delete-all-branches-except-master.html">this post&lt;/a>.&lt;/p>
&lt;ol>
&lt;li>Get on the &lt;code>main&lt;/code> branch.&lt;/li>
&lt;li>Run a script using the following procedure:
&lt;ol>
&lt;li>List all of the branch names&lt;/li>
&lt;li>Look for the asterisk (*) and &lt;em>ignore it&lt;/em> (&lt;code>grep -v&lt;/code> inverts the match)&lt;/li>
&lt;li>Pass all of the other branch names to the git branch deletion command&lt;/li>
&lt;/ol>
&lt;/li>
&lt;/ol>
&lt;p>Here&amp;rsquo;s how it looks.&lt;/p></description><content:encoded>&lt;![CDATA[<p>Every now and then I remember to clean up local, <em>merged</em> git branches, and I always forget how to do it. So, TIL how to think about it, thanks to <a href="https://www.hacksparrow.com/git/delete-all-branches-except-master.html">this post</a>.</p><ol><li>Get on the <code>main</code> branch.</li><li>Run a script using the following procedure:<ol><li>List all of the branch names</li><li>Look for the asterisk (*) and <em>ignore it</em> (<code>grep -v</code> inverts the match)</li><li>Pass all of the other branch names to the git branch deletion command</li></ol></li></ol><p>Here&rsquo;s how it looks.</p><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sh" data-lang="sh"><span class="line"><span class="cl">git checkout main <span class="o">&amp;&amp;</span> <span class="se">\</span></span></span><span class="line"><span class="cl"><span class="se"/> git branch <span class="p">|</span> <span class="se">\</span></span></span><span class="line"><span class="cl"><span class="se"/> grep -v <span class="s1">'^*'</span> <span class="p">|</span> <span class="se">\</span></span></span><span class="line"><span class="cl"><span class="se"/> xargs git branch -d</span></span></code></pre></div><p>I&rsquo;ve created an alias in <code>.zshrc</code> for this command as <code>git-trim</code>. Enjoy, future Dave!</p>
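<p>For reference, here&rsquo;s a sketch of how that <code>git-trim</code> alias might look in <code>.zshrc</code>, assuming <code>main</code> is the default branch:</p>

```shell
# Sketch of a git-trim alias for ~/.zshrc: switch to main, then delete every
# local branch that is already merged (grep -v drops the current branch's "*" line).
alias git-trim="git checkout main && git branch | grep -v '^\*' | xargs git branch -d"
```

<p>Note that <code>git branch -d</code> (lowercase) refuses to delete unmerged branches, which is what makes this safe to run routinely.</p>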
]]></content:encoded><category>til</category><enclosure url="https://media.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif" type="image/jpeg"/></item><item><title>On being uncomfortable with writer's block</title><link>https://vdavez.com/2021/01/on-being-uncomfortable-with-writers-block/</link><pubDate>Mon, 11 Jan 2021 00:00:00 -0600</pubDate><guid>https://vdavez.com/2021/01/on-being-uncomfortable-with-writers-block/</guid><description>&lt;p>Today, I have writer&amp;rsquo;s block. And so, I am just writing.&lt;/p>
&lt;p>According to &lt;a href="https://seths.blog/2020/06/the-simple-cure-for-writers-block/">Seth Godin&lt;/a>, the act of writing is the cure to writer&amp;rsquo;s block. As he puts it:&lt;/p>
&lt;blockquote>
&lt;p>The best way to address this isn&amp;rsquo;t to wait to be perfect. Because if you wait, you&amp;rsquo;ll never get there.
The best way to deal with it is to write, and to realize that your bad writing isn’t fatal.&lt;/p>&lt;/blockquote>
&lt;p>We like to imagine that good writing is the result of inspiration and natural talent. Godin&amp;rsquo;s advice suggests that good writing is the result of &lt;em>practice&lt;/em>.&lt;/p></description><content:encoded>&lt;![CDATA[<p>Today, I have writer&rsquo;s block. And so, I am just writing.</p><p>According to <a href="https://seths.blog/2020/06/the-simple-cure-for-writers-block/">Seth Godin</a>, the act of writing is the cure to writer&rsquo;s block. As he puts it:</p><blockquote><p>The best way to address this isn&rsquo;t to wait to be perfect. Because if you wait, you&rsquo;ll never get there.
The best way to deal with it is to write, and to realize that your bad writing isn&rsquo;t fatal.</p></blockquote><p>We like to imagine that good writing is the result of inspiration and natural talent. Godin&rsquo;s advice suggests that good writing is the result of <em>practice</em>.</p><p>It&rsquo;s helpful to think about writing as the cure for writer&rsquo;s block when we think about other &ldquo;blockers&rdquo; in our lives. How many mental blocks are the result of fear of perfection? How many times do we decline to take action because we aren&rsquo;t confident in our own voice or abilities?</p><p>In that way, the advice is to <em>lean in</em> to those fears and to recognize that blockers are a sign that you simply need practice. You <em>can</em>, but only if you <em>do</em>.</p>
]]></content:encoded><enclosure url="https://source.unsplash.com/T1L9Q5g7eIQ/300x300" type="image/jpeg"/></item><item><title>The etymology of mission</title><link>https://vdavez.com/2021/01/the-etymology-of-mission/</link><pubDate>Sun, 10 Jan 2021 00:00:00 -0600</pubDate><guid>https://vdavez.com/2021/01/the-etymology-of-mission/</guid><description>&lt;p>Business literature abounds with doctrine on the importance of &amp;ldquo;mission statements.&amp;rdquo; A &amp;ldquo;mission,&amp;rdquo; according to Merriam-Webster, is &amp;ldquo;a specific task with which a person or a group is charged.&amp;rdquo; As part of that sense, mission is defined as &amp;ldquo;a preestablished and often self-imposed objective or purpose.&amp;rdquo; All well and good.&lt;/p>
&lt;p>There is an obsolete meaning for a &lt;em>mission&lt;/em>, though: &amp;ldquo;the act or an instance of sending.&amp;rdquo; According to the &lt;a href="https://www.etymonline.com/word/mission">Online Etymology Dictionary&lt;/a>, this obsolete definition of mission came first. The common definition of mission &amp;ndash; a destiny or destination &amp;ndash; did not arise until nearly a century later in the late 17th century.&lt;/p></description><content:encoded>&lt;![CDATA[<p>Business literature abounds with doctrine on the importance of &ldquo;mission statements.&rdquo; A &ldquo;mission,&rdquo; according to Merriam-Webster, is &ldquo;a specific task with which a person or a group is charged.&rdquo; As part of that sense, mission is defined as &ldquo;a preestablished and often self-imposed objective or purpose.&rdquo; All well and good.</p><p>There is an obsolete meaning for a <em>mission</em>, though: &ldquo;the act or an instance of sending.&rdquo; According to the <a href="https://www.etymonline.com/word/mission">Online Etymology Dictionary</a>, this obsolete definition of mission came first. The common definition of mission &ndash; a destiny or destination &ndash; did not arise until nearly a century later in the late 17th century.</p><p>It&rsquo;s interesting to think about how that evolution occurred. Although I can&rsquo;t find any scholarly analysis of the word, here&rsquo;s what I have pieced together. The origin of the word &ldquo;mission&rdquo; comes from the late 16th century and 17th century, during the height of the Counter Reformation, and specifically the rise of the Jesuit missions. In the initial sense of the word, mission was literal: sending Jesuits across the world to spread the Catholic faith. As the Counter Reformation gained momentum, though, the concept of mission shifted: instead of being a &ldquo;push&rdquo; of Jesuits out into the world, the word came to signify a sort of &ldquo;pull&rdquo; to proselytize. 
In other words, the very concept of mission was transformed from describing how the Jesuits carried out their work (i.e., traveling across the world) to a statement of their goals (i.e., converting populations to Catholicism).</p><p>To a modern sensibility, it seems discordant to imagine the act of defining an organization&rsquo;s &ldquo;mission&rdquo; as a descendant of religion generally, and colonialism specifically. Nevertheless, it&rsquo;s interesting to consider how the word&rsquo;s definition evolved &ndash; from <em>how</em> to <em>why</em> &ndash; and to contemplate how modern leaders can understand and give meaning to their own work.</p>
]]></content:encoded><enclosure url="https://source.unsplash.com/0GY8PGd17qw/300x300" type="image/jpeg"/></item></channel></rss>