Pete's Points2018-11-25T18:53:13.915Zhttps://peterlyons.com/problog/feedPeter LyonsFuzzball Desktop Automation2018-11-25T18:53:13.915Z//peterlyons.com/problog/2018/11/fuzzball-desktop-automation<p>Here&#39;s a screencast of my fuzzball desktop automation system. The <a href="https://gist.github.com/focusaurus/506fff3d849bd167c5c809f2f12815e1">accompanying gist is here</a>.</p> <iframe width="560" height="315" src="https://www.youtube.com/embed/nI0jIxzc_YQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> tealeaves gets rust and docker updates2018-09-17T01:00:51.005Z//peterlyons.com/problog/2018/09/tealeaves-gets-rust-and-docker-updates<p>So I&#39;ve been trying to do as much of my local development as possible from within docker containers. With each project I tweak things to be a little bit nicer than the last one. Most recently I went to update my tealeaves ssh key parser utility (which is coded in rust). Initially I was getting all manner of frustrating errors trying to get a reproducible install of rust 1.30 with clippy and rustfmt. I bailed one night in frustration only to later learn that I had left my <code>Cargo.toml</code> modified from an attempt to use a local clone of one of my dependencies, and I had never cloned that repo on my new laptop, so rustc couldn&#39;t find the files it needed. Once I fixed that silly mistake, things mostly started to make more sense.
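</p> <p>The shape of the invocation I ended up with reduces to something like the following sketch. The <code>rust:1.30</code> image tag, the <code>cargo-registry</code> volume name, and the <code>/usr/local/cargo/registry</code> cache path are illustrative assumptions, and the command is echoed rather than executed so the flags are easy to inspect:</p>

```shell
#!/usr/bin/env bash
set -o errexit
# Sketch of a rust dev container invocation (details are assumptions).
# --user matches the host uid/gid so files created in the mounted
# project directory are owned by me, not root. The named volume
# persists Cargo's registry cache across container runs, so stopping
# and starting the container does not redownload every crate.
docker_cmd="docker run --rm --interactive --tty \
  --user $(id -u):$(id -g) \
  --volume cargo-registry:/usr/local/cargo/registry \
  --volume ${PWD}:/opt \
  --workdir /opt \
  rust:1.30"
echo "${docker_cmd}"
```

<p>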
I was able to get a configuration where:</p> <ul> <li>I can do all my local development in the docker container</li> <li>Filesystem user is the same in the container and the host, no annoying <code>permission denied</code> errors with root-owned files</li> <li>Cargo properly caches stuff so every time you stop and start the container you don&#39;t have to redownload and rebuild the universe.</li> </ul> <p>So after I had my dockerized development setup working, I went to continue my long-postponed effort to upgrade to the nom v4 crate. I was delighted to discover that in the lengthy interim, all my direct dependencies had already upgraded to nom v4. So I spent a while chasing compiler errors and making the necessary code adjustments. I almost gave up a few times, but eventually the thing compiled! I almost didn&#39;t believe the terminal. But it compiles now and all the unit tests still pass and it seems to work, so I&#39;m pleased with that.</p> Securing local development with containers2018-07-13T08:36:01.017Z//peterlyons.com/problog/2018/07/securing-local-development-with-containers<p>Starting this Spring when I changed OSes from mac to linux, I decided to experiment with using docker containers to isolate dangerous development tools from my local system. Given the npm malware attack yesterday, this seems like a good time to write up my results so far.</p> <h2 id="motivation">Motivation</h2> <p>OK, I&#39;ll try not to get overly long-winded here, but let me just state broadly that I think the core idea in linux is utterly and fundamentally misguided and inappropriate: you log in to your system with an effective userid, that userid (typically) has read/write access to your entire home directory, and when you execute a program, it runs as your userid and therefore generally has read/write access to all of your files. I have some designs that are basically the opposite of this, but I don&#39;t want to digress into that.
Pragmatically, I wanted to find some way to mitigate this geologically-huge security vulnerability using existing tools, without going full-on Stallman.</p> <p>To just clarify the specific vulnerability here, I&#39;m talking about running commands like <code>npm install</code> and having that download code from the Internet, some of which is malicious, then executing that malicious code and having it do any one of the following nasty things:</p> <ul> <li>Delete a bunch of your files, either maliciously or due to a bug</li> <li>Read a bunch of your private files such as ssh private keys and upload them to an attacker-controlled server</li> <li>Make some subtle and hard-to-detect alteration to some key file</li> </ul> <h2 id="container-isolation-basic-approach">Container Isolation Basic Approach</h2> <p>So when I had a clean slate I decided to try to mitigate this risk with the following basic tactic:</p> <ul> <li>Each project gets a docker container with a basic shell, node/npm, and maybe a few other development tools as needed</li> <li>npm and node never get executed directly on the host OS, only within the container</li> <li>The container only gets a filesystem volume mounted with a specific project working directory. It has no access to my home directory, any dotfiles, or any sibling project directories</li> </ul> <h2 id="setup-script">Setup script</h2> <p>The pattern is similar for most projects, but varies a little bit depending on the tech stack I&#39;m working with (these days mostly node.js or rust), and the specific needs of the project in terms of tools, environment variables, network ports, etc.</p> <p>Here&#39;s a representative sample for a node project.
I typically check a file in as <code>bin/docker-run.sh</code> to fire up that project&#39;s docker container.</p> <pre><code class="language-bash">#!/usr/bin/env bash
# Please Use Google Shell Style: https://google.github.io/styleguide/shell.xml
# ---- Start unofficial bash strict mode boilerplate
# http://redsymbol.net/articles/unofficial-bash-strict-mode/
set -o errexit  # always exit on error
set -o errtrace # trap errors in functions as well
set -o pipefail # don&#39;t ignore exit codes when piping output
set -o posix    # more strict failures in subshells
# set -x        # enable debugging
IFS=&quot;$(printf &quot;\n\t&quot;)&quot;
# ---- End unofficial bash strict mode boilerplate

cd &quot;$(dirname &quot;$0&quot;)/..&quot;
exec docker run --rm --interactive --tty \
  --volume &quot;${PWD}:/opt&quot; \
  --workdir /opt \
  --env USER=root \
  --env PATH=/usr/sbin:/usr/bin:/sbin:/bin:/opt/node_modules/.bin \
  &quot;node:$(cat .nvmrc)&quot; &quot;${1-/bin/bash}&quot;</code></pre> <p>Here&#39;s some detail on what this does.</p> <ul> <li><code>exec docker run</code> runs the docker container. The <code>exec</code> just replaces the shell with the docker process instead of having the shell process stay around waiting.</li> <li><code>--rm</code> deletes the container right away instead of leaving useless cruft around gradually filling your filesystem</li> <li><code>--interactive --tty</code> set this up for an interactive terminal session</li> <li><code>--volume</code> exposes the project&#39;s files to the container</li> <li><code>--workdir</code> puts your shell in the project root right away</li> <li><code>--env</code> sets environment variables. You may need to set things like <code>HOME</code> or <code>USER</code>, maybe not.
For node, adding <code>/opt/node_modules/.bin</code> to your <code>PATH</code> can be handy so you can avoid the silly <code>npm install -g</code>.</li> <li>For node, I get the desired node version from my <code>.nvmrc</code> file in the project root</li> <li><code>--publish 9229:9229</code> is handy to enable devtools debugging to work</li> <li><code>--publish 3000:3000</code> is what you need for a node server project that listens on port 3000</li> <li>The <code>${1-/bin/bash}</code> means when no arguments are passed, run bash, but if an argument is passed, run that instead. Generally I don&#39;t need that but I can do <code>./bin/docker-run.sh node server.js</code> for example if I know I want to run the server.</li> </ul> <h2 id="how-well-does-it-work-">How well does it work?</h2> <p>So far all the basic stuff is working OK. Running npm works, running node works, debugging works OK, running an http server works. Terminal colors work. Arrow keys work. Bash history searching works (at least for a given shell session).</p> <p>One gripe, which I could remedy but just haven&#39;t gotten around to yet, is that in the container I have a vanilla bash configuration without my normal toolbox of zsh and dozens of aliases, functions, settings, etc. Usually I&#39;m only running 3 or 4 commands in the container in a tight loop, and arrow keys and history searching work fine, so it&#39;s OK. However, bash history of commands in the container does not persist, so if I come up with a useful long command line, I need to take special action to capture it in a script or my notes.</p> <h2 id="further-isolation">Further isolation</h2> <p>This is where I am at the moment, but of course as with all security efforts, there&#39;s an endless list of additional measures that could be taken.
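</p> <p>As one data point, the non-root and capability pieces amount to a couple more <code>docker run</code> flags: <code>--user</code> drops root inside the container, <code>--cap-drop ALL</code> removes Linux capabilities, and <code>--security-opt no-new-privileges</code> blocks setuid escalation. This is a sketch (the image tag is illustrative, and the command is echoed rather than executed):</p>

```shell
#!/usr/bin/env bash
set -o errexit
# Sketch of stricter docker run flags (image tag is an assumption).
# Echoed rather than executed so the flag set is easy to inspect.
docker_cmd="docker run --rm --interactive --tty \
  --user $(id -u):$(id -g) \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --volume ${PWD}:/opt \
  --workdir /opt \
  node:10"
echo "${docker_cmd}"
```

<p>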
Here are the next few things I plan to look at.</p> <ul> <li>Use a non-root user in the container</li> <li>Get stricter with docker capability limitations</li> <li>Maybe run git in the container</li> </ul> <p>Right now I&#39;m only running npm and <code>node my-project.js</code> within the container (or cargo for a rust project). I trust git a lot more than I do npm, but with git hooks the same vulnerability ultimately exists with git, so I&#39;d like to run that in the container. However, there are a few kinks to work out in terms of filesystem userids, ssh agent access for pull/push, etc.</p> <p>I hope you found this interesting and useful. Stay safe out there!</p> The Art of the node.js Rescue2018-07-11T04:53:53.422Z//peterlyons.com/problog/2018/07/the-art-of-the-node-js-rescue<p>I&#39;ve recently been helping a client get their node.js mobile back end API server ready to launch a new service. In this post I&#39;ll outline the guidelines I use when I&#39;m brought in to true-up a problematic node.js codebase.</p> <h2 id="first-triage-and-repair-server-basics">First, triage and repair server basics</h2> <p>Before any real rescue efforts can happen, I need to work through the fundamental issues and get up to a bare minimum of a working project and service.</p> <p><strong>Is the code even in a git repository?</strong> I&#39;ve only encountered the &quot;here&#39;s the zip file the previous agency delivered&quot; scenario once, but step one is to get the existing codebase without any new modifications into an initial git commit and get it pushed to a git host.</p> <p><strong>Does it have documentation?</strong> Can a new developer get from <code>git clone</code> to a running server using just the README (no slack allowed!)? If not, I have to reverse engineer that and document it in the README as I figure it out.
As I come to understand the basic software stack, third party service integrations, deployment setup, etc, all that gets documented.</p> <p><strong>Are the dependencies accurately tracked?</strong> Usually they are, but if not I fix <code>package.json</code> and generate a new <code>package-lock.json</code> as necessary.</p> <p><strong>Does the server start and run?</strong> If not, I need to get to that milestone. Even though this sounds like a basic thing any project would have starting at minute 4 of its lifetime, in my experience many node projects get carried away with bullshit fancy code to handle configuration, clustering, process supervision, fancy logging setups, etc, and often that code is misguided or just flat out broken.</p> <p>OK, if the server will start and listen for HTTP requests, I can switch into my normal course of treatment to get it functioning well.</p> <h2 id="automated-unit-tests-are-the-foundation">Automated unit tests are the foundation</h2> <p>The next phase is setting up a good automated unit test stack. These days I reach for <code>tap</code>, <code>supertest</code>, and <code>nock</code> as my first choice libraries. Here are the important points to achieve:</p> <ul> <li>Unit tests should run locally</li> <li>Tests should be fast and deterministic</li> <li>Running partial sets of tests at any granularity should be straightforward</li> <li>Tests should be runnable under the devtools debugger</li> </ul> <p>The main bit of work is finding the right set of libraries, helper functions, and setup/teardown code that make sense for the service. I usually test the HTTP interface via <code>supertest</code> because the code to call an API endpoint via supertest is concise enough to be basically in the same category as just calling a function with arguments.
Since the tests are coded against the same API interface that the web or mobile front ends use, this is a stable integration point and I can typically overhaul or rewrite an endpoint implementation without changing its HTTP interface. Usually API endpoints are not that much code, but if you do have a really complex endpoint, go ahead and write unit tests for the various internal implementation functions.</p> <p>Once the tests are working locally, I&#39;ll set up continuous integration so they work on pull requests and are integrated into github.</p> <h3 id="why-automated-unit-tests-">Why automated unit tests?</h3> <p>If I put on my cynical hat for a moment, I could sum up the bulk of my consultancy as &quot;I come in after non-unit-testing developers and get their code working by writing tests&quot;. Yes, that&#39;s a cynical characterization but there&#39;s a kernel of painful truth there.</p> <p>JavaScript as a language has near-zero support for writing correct programs. It allows and encourages us to write code that does not get even the most basic analysis for correctness. On a typical low-quality node.js server codebase, about the only guarantee likely to actually be upheld is that every file in the require dependency graph is syntactically valid javascript, and absolutely nothing beyond that is guaranteed. ReferenceErrors and TypeErrors are almost guaranteed to exist in large quantities. There&#39;s a plague of code out there crashing in production that was clearly never run: not on the developer&#39;s laptop, not in CI, no one tested it in QA.
Its first execution is on the production server, crashing when triggered by an end user.</p> <p>Putting on my less-cynical hat, I mostly still believe a well-tested node.js codebase is something you can reasonably deliver to a client, and you can point to some pragmatic realities it offers:</p> <ul> <li>Large set of developers able to work with it</li> <li>Enormous ecosystem of available libraries</li> <li>Good to great speed of developing features</li> <li>Good to great performance at runtime</li> <li>Excellent tooling throughout the software development lifecycle</li> </ul> <p>However, these <strong>only</strong> hold true if you have solid test coverage. Untested javascript is such a massive liability and a terrible-odds gamble that I think we as a community working with this technology stack need to take a hard and clear stance and make the following statement:</p> <p><strong>Untested javascript is not a viable deliverable for professional software development.</strong> Viable professional javascript <strong>must</strong> be delivered with extensive tests.</p> <p>Untested javascript is just incredibly likely to be rife with bugs and comes with enormous cost and risk to any refactoring. As agencies, consultants, and employees we need to stop delivering it. Clients need to be educated to insist on a working automated test suite running in a continuous integration system as a baseline deliverable. I would say this is analogous to a plumber leaving a job without ever having put running water through the system.</p> <h2 id="establishing-patterns-and-antipatterns">Establishing Patterns and Antipatterns</h2> <p>A node server codebase lends itself to boilerplatey patterns repeated across a lot of endpoints. I generally set up a dedicated set of example routes to establish the new, correct code patterns.
These of course have full unit test coverage and the idea is to have clean patterns for input validation, control flow, error handling, logging, etc.</p> <p>There are also usually repeated antipatterns. Of course, a well-applied middleware could potentially eliminate a whole class of boilerplate, so that&#39;s the ideal target, but often I find little micro-antipatterns in how the DB is queried or promises are used, etc. I code up examples that illustrate how these are broken and point to the corrected patterns that should be applied when making changes to particular endpoints.</p> <h2 id="tracking-and-fixing-bugs">Tracking and fixing bugs</h2> <p>Once the unit testing stack is solid, I begin the main phase of the real work: going through the API endpoints and identifying where the bugs and issues are. The unit tests are the guide here and the work should be prioritized using whatever information is available. Focus on the high-importance or high-frequency API calls first and leave the ancillary and supporting calls until later.</p> <p>The key tools for this include basic logging, a bug tracking tool, and optionally an error tracking service such as Sentry. The process loop I repeat many times in this phase looks more or less as follows:</p> <ul> <li>Identify a potential bug via an error in the log file, a server crash with stack trace, or a specific API response that is known to be incorrect</li> <li>File a bug for the issue in the bug tracker with the relevant details so it can be understood and reproduced</li> <li>Reproduce the failure in a unit test.
Be sure to do this before making any changes to the relevant application code.</li> <li>Once reproduced in a failing test, code a fix for the issue</li> <li>Guide the fix through delivery and mark as resolved</li> </ul> <h2 id="resist-the-temptation-to-rewrite-and-overhaul">Resist the temptation to rewrite and overhaul</h2> <p>When faced with a nasty codebase, one may feel discomfort, frustration, and anxiety about the state of the code. These feelings can make the following things really tempting:</p> <ul> <li>Start a new codebase entirely</li> <li>Bulk update all the dependencies to latest</li> <li>Update to the latest node.js and npm</li> <li>Do some drastic modification across the entire codebase</li> </ul> <p>My advice here is to resist this temptation. All of these activities, I think, serve the developer&#39;s emotions at the expense of value to the client.</p> <p>A rewrite discards any latent value in the codebase and forces the client to pay again from scratch for development of the functionality of the server. In extreme cases, this can be the only reasonable way forward, but 98% of the time the codebase can be salvaged. If you make a recommendation to your client to rewrite, make sure you have a compelling cost/benefit analysis. And on the other side, be aware of how the sunk cost fallacy can factor in your decision to continue with an existing codebase. Of course, if you do need to make a recommendation to the client to embark on a rewrite, prepare a thorough case study of how specifically the first attempt failed and how each of those failures will be specifically avoided in the rewrite.</p> <p>The goal of this type of rescue is to make the software stable and reliable. Just bulk updating all the dependencies is almost certainly going to introduce novel bugs and work counter to that goal.
It&#39;s fine to update specific dependencies when you have a concrete reason to do so, but just updating things in bulk for general &quot;hygiene&quot; is not appropriate in this situation, and without solid unit tests, you have no indication what has continued to work, continued to be broken, been fixed, or been broken in a novel way.</p> <p>The same logic applies to updating the node.js version. Until you have a substantial set of unit tests, it&#39;s totally in conflict with the project goals to do this.</p> <p>As you add unit tests and increase code coverage, at some point it becomes safer to make broad updates and changes. Exactly where that point lies is a judgement call, but I recommend being conservative here, erring on the side of more tests.</p> <h2 id="code-autoformatting-tools">Code autoformatting tools</h2> <p>I&#39;m a huge fan of autoformatting tools (<code>prettier</code>, <code>eslint --fix</code>, etc) and they&#39;ve become pretty core to how I work. I don&#39;t worry about formatting issues when I&#39;m actually typing code, I just hit <code>cmd+f</code> and trust it to make the code pretty.</p> <p>However, on an existing codebase that has not been using an autoformatter, I recommend caution. I would wait until automated unit test coverage is fairly high before considering running autoformatting across an entire codebase. And when doing so, be aware that this will essentially discard the existing git history and lose track of who wrote which code when (git blame, etc). This is a pretty steep trade-off.
I personally don&#39;t value git history that much, so I have my own point where I&#39;m OK running an autoformatter on a whole codebase; just be sure to understand the trade-offs when making this decision.</p> <p>If you do have valuable history in git that you don&#39;t want to mess up, another strategy to consider is file-by-file extraction of code into a set that is autoformatted and a set that is left untouched.</p> <h2 id="linting">Linting</h2> <p><code>eslint</code> and particularly <a href="https://www.npmjs.com/package/eslint-stats">eslint-stats</a> can be helpful guides. However, due to concerns about adding bugs in untested code, I don&#39;t recommend changing the code based on eslint warnings/errors until after you have unit test coverage. These can help identify troublesome areas in the code, but don&#39;t go into the codebase and fix eslint issues throughout without unit test coverage.</p> <h2 id="a-side-note-about-promises">A side note about promises</h2> <p>Async control flow in node.js is hard. It&#39;s hard enough that many developers never really learn to write it correctly. This is true both in the callback paradigm and with promises as well. However, promises in particular seem to be an area of pervasive misuse and misunderstanding. Every node.js codebase I&#39;ve found that uses promises has incorrect promise usage baked into core patterns and then repeatedly copied throughout the codebase. As a community, we really missed the mark with promises when it comes to education and tooling. Even with the eslint promise plugin, there&#39;s a huge number of issues that are plain as day to me and no eslint plugin I&#39;ve found even detects them as warnings. I&#39;m optimistic that as <code>async/await</code> takes over as the basic mechanism for async control flow, things will improve for the easy case of a series of sequential operations.
However, I think we&#39;ll still have a mess to deal with for any case with looping or complex control flow patterns.</p> <h2 id="node-js-rescue-checklist">node.js rescue checklist</h2> <ul> <li>Get the code into a git repository</li> <li>Document key processes in the README<ul> <li>Initial developer setup</li> <li>Standard development task flow</li> <li>Overview of tech stack, deployment, integrations</li> </ul> </li> <li>Ensure dependencies are properly tracked and documented</li> <li>Ensure the server starts and runs</li> <li>Establish a core unit testing stack</li> <li>Get tests running in CI</li> <li>Set up logging and maybe an error tracking service</li> <li>Set up code coverage reports</li> <li>Set up linting</li> <li>Track bugs in a bug tracker</li> <li>Reproduce bugs in unit tests</li> <li>Add unit tests to get substantial code coverage</li> <li>Set up autoformatting</li> </ul> Linux Mint 19 Cinnamon2018-07-04T19:48:31.862Z//peterlyons.com/problog/2018/07/linux-mint-19-cinnamon<h2 id="xfce4-problems">xfce4 problems</h2> <p>I had some problems with xfce4 that eventually made it untenable.</p> <ul> <li>No way to make <code>ctrl-w</code> always close a browser tab instead of being an emacs kill-word keybinding</li> <li>xfwm4 would occasionally lock up and prevent me from dragging windows around. I had to run <code>xfwm4 --replace</code> to fix it. This is kind of a deal breaker for a window manager.</li> <li>Reordering window buttons in the window list by drag and drop didn&#39;t work</li> <li>I couldn&#39;t find good keyboard shortcuts to switch tabs in xfce4-terminal</li> <li>Sometimes on resume, wifi wouldn&#39;t work</li> <li>Laptop would not suspend properly if the screensaver was active (so ridiculous)</li> <li>Keyboard shortcuts and other keyboard settings are scattered across something like 4 different settings apps<ul> <li>There&#39;s even 2 different apps for &quot;Window Manager Settings&quot; and &quot;Window Manager Tweaks&quot; FFS.
Let&#39;s just send users on scavenger hunts recreationally.</li> </ul> </li> </ul> <h2 id="linux-mint-19-cinnamon">Linux Mint 19 Cinnamon</h2> <p>After experimenting with the live USB image in beta, Linux Mint 19 Cinnamon went full release recently and I installed it that same day. Even though I&#39;ve run linux on and off since ~1999, this was the first time I kept my home directory on a separate partition and could reinstall the OS without wiping my home directory. This has turned out to be awesome because linux does a pretty good job of keeping most personal settings in your home directory, so even after the reinstall, a lot of things continued to have my customizations. For example the actions assigned to my extra mouse buttons are configured in <code>~/.xbindkeysrc</code> and that survives the reinstall just fine.</p> <p>So now I&#39;ve got lots of nice things working in Cinnamon.</p> <ul> <li><code>ctrl+a</code> does select all properly<ul> <li>I just use <code>home</code>/<code>end</code> instead of emacs keybindings as needed</li> </ul> </li> <li><code>ctrl+w</code> closes a chrome tab properly</li> <li>I can close the laptop lid and know the OS will actually suspend</li> <li>pomodoro applet in my panel</li> <li><code>super+right</code>, <code>super+left</code> to switch tabs in gnome-terminal</li> <li>can reorder window buttons in the window list</li> <li>gpaste clipboard history and panel applet</li> <li>System Settings GUI is more clearly organized and usable</li> </ul> <p>I also tweaked my ergodox layout to give me F11 and F12 keys which I use to switch to adjacent workspaces, and I also needed a more accessible ALT modifier, which generally I try to avoid but it&#39;s useful for some things.
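</p> <p>For reference, the <code>~/.xbindkeysrc</code> format pairs a quoted command with the button or key spec that triggers it on the following line. The bindings below are hypothetical examples, not my actual ones:</p>

```
# quoted command, then the triggering button/key spec on the next line
"xdotool key alt+Left"
  b:8
"xdotool key alt+Right"
  b:9
```

<p>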
I really wish I could make <code>super+right</code>/<code>super+left</code> switch tabs in chrome.</p> Linux Setup Progress2018-04-21T22:42:02.765Z//peterlyons.com/problog/2018/04/linux-setup-progress<p>Here are some miscellaneous notes on what I&#39;ve gotten set up on my new linux laptop so far.</p> <h2 id="distribution-and-desktop-environment">Distribution and Desktop Environment</h2> <p>I tried 3:</p> <ul> <li>PopOS (ubuntu tweaked for system76, gnome-shell desktop)</li> <li>Xubuntu (ubuntu with Xfce4 desktop)</li> <li>Linux Mint Xfce4 (ubuntu with xfce4 desktop)</li> </ul> <p>I like to think that I don&#39;t care that much about my desktop environment, but I couldn&#39;t find a good clipboard manager for gnome shell and that was enough to make me try some others.</p> <p>I don&#39;t know what went on with my Xubuntu install but it was a disaster. No gui to set up wifi out of the box and weird crashiness. I ran away screaming.</p> <p>Linux Mint Xfce4 edition is where I landed and so far is working OK.</p> <h2 id="text-snippets">Text Snippets</h2> <p>On mac I had a keyboard maestro macro that I loved that handled text expansion really nicely. I triggered it with <code>,,</code> and it would regex match the previous abbreviation, so to type my email I would type <code>em,,</code> and it would replace that with my email, based on the contents of <code>~/projects/snippets/em</code>. It was really easy to add new ones on the fly, replace them, etc.</p> <p>I haven&#39;t found a way to exactly match that trigger mechanism on linux (and honestly it took me a long time to arrive at that final solution on the mac). But I found something also great, just different. I have an xfce4 keyboard shortcut <code>ctrl-,</code> which runs a shell script I wrote which basically populates <a href="https://git.suckless.org/dmenu/">dmenu</a> with all of my abbreviations. This is nice because I get visual completion of the abbreviations.
Once dmenu has handled selecting the abbreviation, my script reads the corresponding file and copies its content into the x11 clipboard via <code>xclip</code> then immediately pastes it as well via <code>xdotool</code> typing <code>ctrl-v</code> to complete the expansion. Also this dmenu approach means there&#39;s no abbreviation in the target app to delete. The main trick I had to figure out was that the terminal needs <code>ctrl-shift-v</code> to paste, which is annoying. Here&#39;s a snippet of how I figured that out (get the PID of the current window and see if its command is my terminal emulator):</p> <pre><code class="language-sh"># check for terminal or not
pid=$(xdotool getactivewindow getwindowpid)
exec=$(cat /proc/${pid}/cmdline)
if [[ &quot;${exec}&quot; == &quot;xfce4-terminal&quot; ]]; then
  xdotool key ctrl+shift+v
else
  # paste into the active window via keyboard shortcut
  xdotool key ctrl+v
fi</code></pre> <h2 id="commander">Commander</h2> <p>Commander is my heads-up-display style app where I trigger fancy scripts by name. Think shell but powered by python. I found a really great fit for this in <a href="https://github.com/lanoxx/tilda">tilda</a>, which I have bound to <code>super-space</code>; running commander in it works great.</p> <p>Things are different enough that I might rethink whether I really need a long-running python process. I might port commander from python to node and the startup time might be fast enough to just run every command as a separate process, either just shell or node.</p> <h2 id="docker">Docker</h2> <p>This time around my plan is to do development with command line developer tools like node, npm, pip, etc always run in a docker container with a volume mount to just one directory containing a specific project, not my entire filesystem or entire home directory.
I think it will mostly work but there might be some hassles and kinks.</p> <h2 id="sticky-keys">Sticky Keys</h2> <p>Sticky keys setup is actually super easy on Xfce4: there&#39;s a checkbox in the accessibility settings. Done. The only thing I had to tweak was to add an indicator widget to the xfce4 panel to show when modifier keys are stuck down. That package is called <code>indicator-xkbmod</code>.</p> <h2 id="function-key-app-hotkeys">Function Key App Hotkeys</h2> <p>I&#39;m used to mapping my function keys to apps: F1 is browser, F2 is terminal, etc. Now that I have multiple desktops again, I <em>might</em> move to some different approach, but so far I&#39;m trying to recreate that system. I have something that sort of works via <code>wmctrl</code> and <code>xprop</code> but there are some quirks I still need to figure out.</p> <h2 id="fewer-modifiers">Fewer modifiers</h2> <p>I&#39;m realizing how nice it is on mac to have 2 good modifier keys: command and control (which I have bound to my caps lock key). I guess I should think about swapping my super and alt keys so super gets the prime real estate on either side of the space bar and alt can be next to that, in the unreachable palm-of-your-hand area.</p> <p>In any case, my atom keybindings are going through a chaotic reorganization but hopefully everything will settle down soon.</p> <h2 id="emacs-key-bindings">Emacs Key Bindings</h2> <p>On mac, <code>ctrl-a</code> and <code>ctrl-e</code> will move the cursor to the start and end of a line (borrowing emacs default keybindings) and this works everywhere: terminal, GUI text editors, browsers, Apple Apps, etc. I&#39;m realizing on linux neither firefox nor chrome do this by default. I haven&#39;t looked into solving this via extensions yet, but either I&#39;ll do that or I&#39;ll become re-accustomed to using my hardware home/end keys again.</p> <h2 id="fonts">Fonts</h2> <p>Oh my God, the default resolution/zoom/font situation is letters-for-ants small.
For chrome I set the default zoom way high. For atom I set my theme font to the largest allowed value of 20pt and it&#39;s usable but still on the small side.</p> <h2 id="xfce4-settings-dialogs">Xfce4 Settings Dialogs</h2> <p>I&#39;ll chalk this one up to the absence of designers from the linux world, but every settings app in Xfce4 is designed as if it were a dialog box (which it isn&#39;t, it&#39;s a standalone application), and by default comes up occupying like 20% of the screen width and requiring tons of horizontal scrolling. I&#39;m in the habit of only working with maximized windows, so I find this to be a nuisance, but at least a quick <code>ctrl-m</code> maximizes them so they are usable.</p> Switching Back to Linux2018-04-16T03:31:48.841Z//peterlyons.com/problog/2018/04/switching-back-to-linux<p>So I bought a laptop from System76 with linux and I&#39;m slowly getting my tools and environment going. I&#39;ve been using macos on a macbook for a while, maybe 8 years or so, not sure. I&#39;m pretty dang adjusted to it, and so far the shift back to linux has been pretty jarring. If I wait too long to write down thoughts, I&#39;ll skip them, so here are some stream-of-consciousness notes so far.</p> <h2 id="the-new-laptop-hardware">The New Laptop Hardware</h2> <ul> <li>nice<ul> <li>light weight</li> <li>small and light power adapter</li> <li>normal plug on the power adapter like a lamp not a giant heavy brick that falls out of the socket</li> <li>Keyboard has dedicated delete key, home, end, page up/down all of which I like (macbooks don&#39;t have these)</li> <li>For the price, I get a lot more ram, SSD, disk, CPU, and ports</li> <li>no dongles</li> </ul> </li> <li>not nice<ul> <li>space bar on the keyboard is glitchy</li> <li>trackpad is really really terrible. I&#39;m not sure I&#39;ll be able to use it effectively.
Probably need a dedicated mouse that I always bring with my laptop.</li> </ul> </li> </ul> <h2 id="software-stuff">Software Stuff</h2> <p>I have a pretty long list of stuff I need to find linux equivalents for.</p> <p>The password manager situation is bleak. I use 1Password on my mac, but there&#39;s no native linux app. They have a chrome extension that works on linux, but not with local files, only with a paid cloud hosted account. I searched a long time for any viable process to get from 1Password to KeePass without developing a new import/export tool and it looks like there&#39;s nothing ready to go, so I ponied up for the cloud account in the name of expediency. I&#39;ll probably try to write an import/export tool later but I&#39;m pretty much dead in the water without a working password manager so I wanted to get unblocked on that.</p> <p>It took a while for me to find the right commands to adjust the keyboard repeat settings, which it turns out need to be in a very precise configuration or I find the keyboard unusable (this is the same for me on any computer). If I&#39;m working on a friend&#39;s laptop with slow key repeat for more than 5 minutes I get frustrated and have to configure their settings. I did eventually find what I need and also found a way to get them to run when I log in so that&#39;s square now. Oh and I found the equivalent of &quot;sticky keys&quot;. It&#39;s not as nice as on macos because there&#39;s no on-screen indication of stuck keys and AFAIK so far no easy way to turn it on/off but it&#39;s doing the main thing fine.</p> <p>For launching and activating applications with hotkeys I&#39;m using gnome keyboard shortcuts that run a script that uses <code>wmctrl</code> to find the intended window and activate it. This is so far a reasonably good substitute for my equivalent keyboard maestro macros.</p> <p>For my heads-up commander terminal, I easily found <code>tilda</code> which is basically exactly what I was looking for.
Luckily commander is python and largely cross platform.</p> <p>WiFi &quot;just worked&quot; for real this time, I&#39;m pleased to report, even the captive portal at the coffee shop from which I write this post. Well, that&#39;s mostly true in that gnome detected the captive portal, but the actual portal consent page only worked in chrome, not firefox.</p> <p>The screen and fonts have been a challenge so far. In general everything is too small, which I can probably solve with some combination of zoom settings and maybe a lower resolution, but I want to do some more research before implementing something. I&#39;ve been hacking around it with a few font size increases and some browser zooming.</p> <p>In general the fonts look bad compared to macos but that&#39;s something that I adjust to quickly and stop noticing. Also that old familiar X windows pointer just looks pretty sorry. I&#39;m not sure why but I have a mental association with that mouse pointer icon and craptastic software.</p> <p>That&#39;s where I am now. It&#39;s more or less usable but I still have a pretty big laundry list of stuff to configure before I can start doing any actual project work on this laptop.</p> Denver Rust Meetup Reactivated2018-03-01T06:45:50.564Z//peterlyons.com/problog/2018/03/denver-rust-meetup-reactivated<p>So I just went to my first meetup of the Denver/Boulder Rust meetup group. The group has been around a while but went inactive for over a year. I was and am really excited to get it going again. The meetup went really well! We had about 20 attendees and folks seemed pretty engaged and interested. One of our planned speakers canceled at the last minute due to a family emergency, but Joel Dice and I just did what software developers normally do: expand time spent to fill up time allocated! Just kidding (kind of)!
Actually, I only went 4 minutes over my 20 minute slot, which for me was a personal best: without multiple practices, my default length for &quot;talk about tech topic X&quot; is 90 minutes before I start to realize I&#39;m talking too long.</p> <p>After the talks we grouped up to work on stuff and I paired with <a href="https://twitter.com/DebugSteven">@DebugSteven</a> on trying to get the rustdoc HTML docs to stay collapsed if you click the collapser minus sign and reload the page (currently they forget and show up expanded again). We got a simple change coded and I&#39;ll be submitting a pull request shortly. Well, maybe not so shortly because it seems the <code>main.js</code> file we changed is actually part of the main &quot;rust&quot; repo with the entire language and a bunch of other tools. So even for me to do a <code>python x.py doc</code> seems to require several CPU-hours of compilation.</p> <p>I ordered too much food. I knew I was going to. On the phone I thought &quot;I&#39;m about to order too much food&quot;. Then I hung up and thought &quot;I just ordered too much food&quot;. I don&#39;t really know how this works, but luckily folks were willing to take all the leftovers home so I didn&#39;t end up having to deal with that.</p> StackOverflow 100K2018-02-15T13:16:57.039Z//peterlyons.com/problog/2018/02/stackoverflow-100k<p>I just crossed 100,000 reputation on stackoverflow. I&#39;ve posted 1618 answers. Most of this activity was around node.js and javascript, especially from when I was really focused on learning a new ecosystem between 2011 and 2014. Analytics indicate my answers have reached 6.3 million people, which I&#39;m very pleased with.</p> <p>I earned gold badges for javascript, node.js, and express. My favorite badge is &quot;Necromancer&quot; (answer a question more than 60 days later with a score of 5 or more), which I earned 11 times.</p> <p>The past two years or so have been mostly compound interest.
A lot of it is from answering really, really basic questions early enough to become a repeat hit in search queries. The answer I put the most effort into is <a href="https://stackoverflow.com/a/19623507/266795">how to structure an express.js application</a>.</p> <p><a href="https://stackoverflow.com/users/266795/peter-lyons">Here&#39;s my stackoverflow profile</a></p> Hourly Billing Is OK2018-01-13T18:01:42.327Z//peterlyons.com/problog/2018/01/hourly-billing-is-ok<h2 id="tl-dr">TL;DR</h2> <ul> <li>fixed bid, value-priced business consulting is OK</li> <li>software consulting billed hourly is also OK</li> <li>they are distinct lines of work and expertise</li> </ul> <h2 id="background">Background</h2> <p>There are three highly-visible people advocating strongly against hourly billing and in favor of value pricing. Jonathan Stark is so jazzed about it that he wrote an e-book on it. Patrick McKenzie and Thomas Ptacek have also written fairly extensively along the same lines.</p> <h2 id="problems-with-hourly-billing">Problems with Hourly Billing</h2> <p>A lot of the writing I&#39;ve read and podcasts I&#39;ve heard on this topic are primarily focused on disparaging hourly billing for its many evils. Jonathan Stark is particularly vehement about this point, going so far as to call it his mission to &quot;rid the world&quot; of hourly billing. He&#39;s also written an e-book with an insulting and ableist title that I will not be stating here or linking to.</p> <p>In my research for this article I found a bunch of specific bullet points written in an old HackerNews comment by Thomas Ptacek and responded to each one. That allowed me to get some feelings out and clarify my thoughts but I don&#39;t think it&#39;s valuable to post, so I&#39;m going to heavily summarize my points/responses here.</p> <p>The basic takeaway I got from reading these laundry lists of problems with hourly billing is that they find it causes a host of behaviors on both sides that are value-distracted.
This includes bickering or negotiating small details like the dollar figure per hour, the hours billed on a given invoice, or the quantity of invoices. They claim that it poorly positions you and incentivizes you to book maximum hours.</p> <p>None of these claims have manifested in my work. I don&#39;t negotiate my rate. I never have and no client has ever initiated a negotiation. It goes one of 3 ways:</p> <ol> <li>The client ghosts after I first state my rate. So long!</li> <li>The client comments that the rate is high but agrees to pay it and does so happily.</li> <li>The client happily pays it with no comment.</li> </ol> <p>That&#39;s it.</p> <p>No client of mine has ever contested an invoice. I send 1-liner invoices every 2 weeks that take the form &quot;I worked N hours at rate X so your total due is Y&quot;. There&#39;s no breakdown of the hours into tasks.</p> <h2 id="the-real-message">The Real Message</h2> <p>I think most of the anti-hourly writing has a valuable message but that message has nothing to do with billing whatsoever.</p> <p><strong>Do value-focused consulting, not pure software development</strong>. I think that makes perfect sense. Don&#39;t focus exclusively on technical matters. Work with product owners, UX designers, marketing, and executives to discover and clarify clear business wins and laser focus on delivering that value.
You can charge more doing this, and it&#39;s what I call &quot;software consulting&quot; as opposed to &quot;contract software development&quot; where you are either doing straight staff augmentation or outsourced software development and are walled off from more strategic involvement.</p> <p>I think some of the concerns about hourly billing are really concerns about software development vs software consulting but are misattributed to a billing/pricing issue.</p> <h2 id="what-about-those-20k-week-engagements">What about those $20K/week engagements</h2> <p>I think one of the main hooks here is Patrick McKenzie posting highly transparent articles about his big-ticket value-priced consulting engagements. The whole thing smacks a bit of &quot;You could be sipping cocktails on a Caribbean beach&quot; BS, but here&#39;s what I think the underlying truth is.</p> <p>There is a totally separate business service here that is what Patrick McKenzie and Jonathan Stark do. It&#39;s business consulting. In particular they blend in a lot of technical know-how, but their service offering is fundamentally not in software development. They are basically MBAs who can code for hire. They deal in things like email marketing, copywriting, sales methodologies, A/B testing, sales funnel optimization, SEO, pricing strategy, etc. These are all great and valuable skills and value pricing those engagements sounds great! No objection to that. But my reading of their writing is that they claim your average software consultant should aspire to do this, and I disagree. It&#39;s a totally different line of work requiring a totally different set of expertise. For me, it would be completely unsatisfying and I suspect many other developers would agree. I like coding and technical work. My consulting has a significant non-technical portion, but at the end of the day I&#39;m still spending a lot of time coding.</p> <h2 id="daily-rates">Daily Rates</h2> <p>Daily rates are fine.
I don&#39;t consider them fundamentally different from hourly rates, and if you are going to work exclusively for 1 client for a period, I recommend them and they should be more lucrative than hourly rates. However, having done 6+ months of daily rate, I personally prefer hourly even though I take home less cash. Basically I prefer to bill 3-5 super-solid productive hours a day and then go about my non-work life without a second thought, and daily billing isn&#39;t really compatible with the days when I only have 3 hours in me or I&#39;m blocked waiting for someone else. I can also interleave multiple clients more easily with hourly billing. My ideal workload, from both a productivity and a satisfaction standpoint, is two active hourly clients.</p> <h2 id="when-value-pricing-works-for-me">When Value Pricing Works for Me</h2> <p>I have done or would do value pricing for the more well-defined &quot;productized consulting&quot; things I sometimes take on. I charge flat value-based rates for training courses, for example. Because I work in software and don&#39;t teach the same material that many times without significant content updates, it&#39;s not as lucrative as it could be. But teaching the same course a dozen times is also not as rewarding or enjoyable to me personally. I would (and have) bid code audits and retainers (a slightly separate topic) at fixed prices as well. But currently this is a small minority of my work.</p> <h2 id="please-change-your-tone">Please Change Your Tone</h2> <p>This is a plea to these authors to change their message and tone. Hourly billing is fine and powers a huge portion of the economy. It&#39;s simple, versatile, and resilient. The constant disparagement I read and hear hurts my feelings and makes me angry and frustrated and sad. I feel my lifestyle and that of many of my close family members and friends is being needlessly attacked and portrayed as evil and exploitive when it&#39;s actually fine. The world is complex.
Calculating the value of work is extremely complex, if not impossible, in many/most situations. Value pricing is only even remotely possible in a particular subset of relatively straightforward situations.</p>