0ink.net
Not the site you are looking for...
2024-03-15T03:16:59+01:00
https://www.0ink.net/images/2021/0ink.png
Alejandro Liu
urn:uuid:93d8401b-95a4-ecff-657d-d0de2d7da659
Locking down SFTP
urn:uuid:bc01ad51-fca0-7429-3ced-79fd793cb8c6
2024-03-05T00:00:00+01:00
alex
<p><img src="/images/2024/sftp.png" alt="sftp" /></p>
<p>This is a small recipe to increase the security around an SFTP interface.</p>
<p>In the <code>/etc/ssh/sshd_config</code> file include the following settings:</p>
<pre><code class="language-text">Subsystem sftp internal-sftp</code></pre>
<p>This configures the sftp subsystem to use the internal sftp implementation.
This is because inside the chroot, we usually will not have the normal
<code>sftp-server</code> executable.</p>
<p>For each user that will be doing <code>sftp</code> do:</p>
<pre><code class="language-text">Match User sftp-only-user-name
ChrootDirectory /only/path
ForceCommand internal-sftp
X11Forwarding no
AllowTcpForwarding no
PermitTTY no</code></pre>
<p>Alternatively, you could use <code>Match Group</code> and place multiple sftp-only users in the
specified group.</p>
<p>The options are:</p>
<ul>
<li><code>ChrootDirectory /only/path</code> : Note that this directory must have mode <code>0755</code> and be
owned by root. If this is not the case, logins will fail with the error: <br /><code>bad ownership or modes for chroot directory</code></li>
<li><code>ForceCommand internal-sftp</code> : Only allow <code>sftp</code>. No other command will be allowed.</li>
<li><code>X11Forwarding</code>, <code>AllowTcpForwarding</code>, <code>PermitTTY</code> set to <code>no</code> : These make sure that
the remote user cannot open holes at the SSH protocol level.</li>
</ul>
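<p>As a concrete sketch of the layout <code>sshd</code> expects: the chroot directory itself (and every path component above it) must be root-owned, mode <code>0755</code>, and not writable by the user, with a writable subdirectory below it for uploads. The snippet below uses a scratch prefix so it can run unprivileged; in production the prefix is <code>/</code> and the <code>chown</code> commands in the comments apply. <code>/only/path</code> and <code>sftp-only-user-name</code> are the placeholders from the configuration above:</p>
<pre><code class="language-bash">prefix=$(mktemp -d)
# the chroot itself: mode 0755, not writable by the sftp user
mkdir -p "$prefix/only/path/upload"
chmod 755 "$prefix/only/path"
# in production (as root):
#   chown root:root /only/path
#   chown sftp-only-user-name /only/path/upload
mode=$(stat -c '%a' "$prefix/only/path")
echo "chroot mode: $mode"
rm -rf "$prefix"</code></pre>
<p>The writable <code>upload</code> subdirectory is needed because the sftp user cannot write to the chroot root itself.</p>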
<p>References:</p>
<ul>
<li><a href="https://www.baeldung.com/linux/openssh-internal-sftp-vs-sftp-server">https://www.baeldung.com/linux/openssh-internal-sftp-vs-sftp-server</a></li>
<li><a href="https://gist.github.com/kjellski/5940875">https://gist.github.com/kjellski/5940875</a></li>
<li><a href="https://serverfault.com/questions/584986/bad-ownership-or-modes-for-chroot-directory-component">https://serverfault.com/questions/584986/bad-ownership-or-modes-for-chroot-directory-component</a></li>
</ul>
<p><a href="https://www.flaticon.com/free-icons/sftp" title="sftp icons">Sftp icons created by Freepik - Flaticon</a></p>
Python GUI
urn:uuid:038c7fed-7cc6-8b91-c7bc-a561e251520f
2024-03-05T00:00:00+01:00
alex
<p><img src="/images/2024/pygui.jpg" alt="pygui" /></p>
<p>After looking at multiple options for GUI programming under <a href="https://www.python.org/">python</a> I
eventually settled on <a href="https://en.wikipedia.org/wiki/Tkinter">tkinter</a>. The main reason was that
<a href="https://en.wikipedia.org/wiki/Tkinter">tkinter</a> is very ubiquitous, and I initially thought the learning
curve would be shorter as I was very used to GUI programming in
<a href="https://www.tcl.tk/">TCL/TK</a>. It turned out that what I knew of <a href="https://www.tcl.tk/">TCL/TK</a> did not translate
very well to <a href="https://en.wikipedia.org/wiki/Tkinter">tkinter</a> in <a href="https://www.python.org/">python</a>.</p>
<p>I also found out that some <strong>basic</strong> features I was used to in <a href="https://www.tcl.tk/">TCL/TK</a> were
not available in <a href="https://www.python.org/">python</a>. For example:</p>
<ul>
<li>Implementing optional scrollbars</li>
<li>Scrollable frames</li>
</ul>
<p>At some point I considered using <a href="https://kivy.org/">kivy</a>, but in the end I did not, since its
main advantage is that you can create mobile apps. My primary
phone is an iPhone, and I don't think I would be able to create iPhone apps
due to Apple's walled garden restrictions.</p>
<p>So, to try things out, I wrote a couple of scripts:</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2020/pa-hints">patoggle</a> <br />This one is not that interesting, as it only shows things on the screen and does not
take any input. But I thought it was a good starting project. <br /><img src="/images/2024/patoggle.png" alt="patoggle screenshot" /></li>
<li><a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2023/xprtmgr">xprtmgr</a> <br />This is a more complete application. It was a good learning experience. <br /><img src="/images/2024/xprtmgr.png" alt="xprtmgr screenshot" /></li>
</ul>
<p>I assume that as I get more experience, things should be easier.</p>
cisco bridging
urn:uuid:0aaef5d8-4a7b-055b-9ab1-fc58dbeb250b
2024-03-05T00:00:00+01:00
alex
<p><img src="/images/2024/cisco_logo.png" alt="cisco" /></p>
<p>This article is here as a reminder.</p>
<p>So, for testing, I needed to configure a
<a href="https://www.cisco.com/c/en/us/products/routers/cloud-services-router-1000v-series/index.html">Cisco CSR1000V virtual router</a> as a bridge, using a version 16 Cisco
IOS XE image. To make my life easier I used the "wizard" that runs the first
time to automatically configure bridging. Ironically, this created an invalid
configuration.</p>
<p>Over the years, cisco has transitioned through multiple ways of configuring bridging,
and searching the Internet it was not clear to me how to do it. Eventually
I managed to configure it using bridge domains. The configuration is as follows:</p>
<h2 id="Configure+spanning+tree+features" name="Configure+spanning+tree+features">Configure spanning tree features</h2>
<p>These are cisco global settings. For my test I was using the following:</p>
<pre><code class="language-text">spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree portfast bpduguard default
spanning-tree extend system-id</code></pre>
<p>Some of these settings are on by default, so in some cases you don't need to set them explicitly.</p>
<h2 id="Configure+bridge+domains" name="Configure+bridge+domains">Configure bridge domains</h2>
<pre><code class="language-text">bridge-domain 1
bridge-domain 200
bridge-domain 201</code></pre>
<p>Actually, these are not needed, as they are created automatically when configuring
the bridge-domain interfaces. However, they will show up in the <code>running-config</code>.</p>
<h2 id="Configure+bridge+members" name="Configure+bridge+members">Configure bridge members</h2>
<p>For each network interface that is a member of the bridge, you need:</p>
<pre><code class="language-text">interface GigabitEthernet1
no ip address
service instance 1 ethernet
encapsulation dot1q 1
bridge-domain 1
!
service instance 200 ethernet
encapsulation dot1q 200
bridge-domain 200
!
service instance 201 ethernet
encapsulation dot1q 201
bridge-domain 201
!</code></pre>
<ul>
<li>The <code>interface</code> line selects the port that is part of the bridge.</li>
<li><code>no ip address</code> : We are doing Layer-2, so no IP is needed.</li>
<li>For each VLAN that we are bridging we need:
<ul>
<li><code>service instance ID ethernet</code></li>
<li><code>encapsulation dot1q VLAN_ID</code></li>
<li><code>bridge-domain ID</code></li>
</ul></li>
<li>Note that I made the VLAN_ID the same as the instance ID and the bridge-domain ID.
This is not necessary but makes things less confusing.</li>
<li><code>encapsulation</code> is used for VLAN tagging. It is possible to use <code>encapsulation untagged</code>.
However, Spanning Tree protocol doesn't run on the untagged VLAN.</li>
</ul>
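<p>To verify the result afterwards, the following IOS XE <code>show</code> commands are useful (a sketch; the exact output format varies between releases):</p>
<pre><code class="language-text">show bridge-domain
show ethernet service instance
show spanning-tree summary</code></pre>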
Tunneling NFS over SSH
urn:uuid:394c86b9-55b2-509e-d8c6-2f12f4bbf4b4
2024-03-05T00:00:00+01:00
alex
<p><img src="/images/2024/padlock.png" alt="padlock" /></p>
<p>This recipe is for tunneling NFS traffic over SSH. This adds encryption
and Public Key authentication to otherwise insecure NFS traffic.</p>
<p>This recipe requires NFSv4. Earlier versions were
not tested, but I expect that not all of the functionality will work with them.</p>
<h2 id="server+configuration" name="server+configuration">server configuration</h2>
<p>Install packages:</p>
<ul>
<li>nfs-kernel-server</li>
<li>ncat or netcat-openbsd</li>
</ul>
<p>Configure <code>/etc/exports</code>. Add a line:</p>
<pre><code class="language-bash">/export/path 127.0.0.1(insecure,... other options...)</code></pre>
<ul>
<li><code>/export/path</code>: File system to export</li>
<li><code>127.0.0.1</code>: Loopback address, we only allow local connections.</li>
<li><code>insecure</code>: Normally, the NFS server only allows connections from ports
less than 1024. This option removes that restriction. We need this because
the <code>ssh</code> traffic is running as a normal user.</li>
</ul>
<p>Additional NFS export options:</p>
<ul>
<li><code>rw</code> : Allow read/write access</li>
<li><code>sync</code> : sync I/O (recommended to prevent data loss)</li>
<li><code>no_subtree_check</code> : When exporting a full filesystem, this removes the subtree checks.
This has to do with the fact that <code>NFS</code> uses <code>inodes</code>. The check is needed to
make sure that the <code>inode</code> is within the exported filesystem sub-tree. However,
if you are exporting the entire filesystem, an <code>inode</code> can never fall outside the
<code>subtree</code>.</li>
<li><code>no_root_squash</code> : allow root access</li>
<li><code>mountpoint=/mount/path</code> : Only export if a filesystem is mounted on <code>/mount/path</code></li>
</ul>
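<p>Putting the options together, a complete <code>/etc/exports</code> line might look like this (a sketch; <code>/export/path</code> is a placeholder as above):</p>
<pre><code class="language-bash">/export/path 127.0.0.1(insecure,rw,sync,no_subtree_check)</code></pre>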
<p>Obtain a public/private key pair. Either use <code>ssh-keygen</code> or copy one from elsewhere.
Install it in <code>authorized_keys</code> on the account to be used for SSH/TCP forwarding:</p>
<pre><code class="language-bash">command="nc -N localhost 2049",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa ...
</code></pre>
<p>This account does not need to be <code>root</code>. The additional settings make sure that this
key can only be used for forwarding traffic.</p>
<h2 id="client+configuration" name="client+configuration">client configuration</h2>
<p>This is for <a href="https://en.wikipedia.org/wiki/Ubuntu">Ubuntu</a>. Other distros may need different packages
and/or approaches.</p>
<p>Install packages:</p>
<ul>
<li>nfs-common</li>
<li>ncat (netcat-openbsd is not enough, ncat needs to support -e or -c)</li>
</ul>
<p>Make sure the NFS server is in the SSH forwarder's <code>known_hosts</code> file. The following
command takes care of this:</p>
<pre><code class="language-bash">ssh -n -i $ssh_key -T \
-o StrictHostKeyChecking=accept-new \
-o BatchMode=yes \
-o ConnectTimeout=10 \
$nfs_server</code></pre>
<p>In this approach we are using <code>ncat</code> to implement the <code>SSH</code> forwarder. Run this
command from the <code>/etc/rc.local</code> file:</p>
<pre><code class="language-bash">lport=4096
ssh_key=/path/to/ssh/private/key
ssh_opts="-o BatchMode=yes -o ConnectTimeout=10 -a -C -T"
nfs_srv=nfs-server
( ncat -l $lport -k --allow localhost -c "exec ssh -i $ssh_key $ssh_opts $nfs_srv" ) &</code></pre>
<p>Options:</p>
<ul>
<li><code>-a</code> : disable agent forwarding</li>
<li><code>-C</code> : request compression</li>
<li><code>-T</code> : disable pty</li>
</ul>
<p>An alternative to this is to use <code>inetd.conf</code> or <a href="http://0pointer.de/blog/projects/inetd.html">systemd socket activation</a>.</p>
<p>At this point we are ready to mount NFS filesystems:</p>
<pre><code class="language-bash">mount -t nfs \
-o nfsvers=4,nolock,nosuid,nodev,port=4096,sec=sys,tcp,soft,intr,fg \
localhost:/export/nfs/path \
/mnt</code></pre>
<p>Options:</p>
<ul>
<li><code>nfsvers=4</code> : make sure we are running NFSv4</li>
<li><code>nosuid</code> : disable SUID executables</li>
<li><code>nodev</code> : disable device files</li>
<li><code>sec=sys</code> : traditional UNIX security modes</li>
<li><code>tcp</code> : use TCP protocol</li>
<li><code>soft,intr</code> : how we handle CTRL+C and other errors</li>
</ul>
<h2 id="Caveats" name="Caveats">Caveats</h2>
<ul>
<li>The <code>showmount</code> command does not show NFSv4 information.</li>
<li>I have not tried to use this with <code>autofs</code>. The <code>autofs</code> configuration for NFS shares is in
<code>/etc/autofs/auto.net</code>, but it uses the <code>showmount</code> command, so it probably would not
work out of the box.</li>
</ul>
<h2 id="Notes" name="Notes">Notes</h2>
<p>Some implementations of the <code>nc</code> command support a <code>-p source_port</code> option. This would
remove the need for the <code>insecure</code> option in the NFS export options. However, this
requires netcat to run as root.</p>
<hr />
<p>References:</p>
<ul>
<li><a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/s1-nfs-client-config-options">Common NFS mount options</a></li>
<li><a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-nfs-server-config-exports">/etc/exports documentation</a></li>
<li><a href="https://en.wikipedia.org/wiki/Ubuntu">ubuntu</a></li>
<li><a href="https://help.ubuntu.com/community/Autofs">autofs help</a></li>
<li><a href="https://help.ubuntu.com/community/NFSv4Howto">Ubuntu NFSv4 HOWTO</a></li>
</ul>
<hr />
<p><a href="https://www.flaticon.com/free-icons/lock" title="lock icons">Lock icons created by Freepik - Flaticon</a></p>
Optimizing shell scripts
urn:uuid:492207d2-c5cb-f054-5233-5cb2048f981a
2024-03-05T00:00:00+01:00
alex
<div id="toc"><ul>
<li><a href="#Introduction">Introduction</a></li>
<li><a href="#Input+Data">Input Data</a></li>
<li><a href="#Desired+Output">Desired Output</a></li>
<li><a href="#Approach">Approach</a></li>
<li><a href="#Original+Script">Original Script</a></li>
<li><a href="#Optimized+Script">Optimized Script</a>
<ul>
<li><a href="#Moving+invariant+code+out+of+loops">Moving invariant code out of loops</a></li>
<li><a href="#Replacing+if%2Fthen%2Felse+with+case">Replacing if/then/else with case</a></li>
<li><a href="#Using+IFS+for+parsing">Using IFS for parsing</a></li>
</ul></li>
<li><a href="#Conclusion">Conclusion</a></li>
</ul></div>
<hr />
<p><a href="https://bashlogo.com/"><img src="/images/2024/bash_logo.png" alt="Bash Logo" /></a></p>
<h2 id="Introduction" name="Introduction">Introduction</h2>
<p>I consider myself a fairly competent shell scripter. I typically prefer
to program for readability, but in the end I tend to write terse code.</p>
<p>Usually, readability is important because other people (including myself
in the future) will need to read the code and figure out what is going
on.</p>
<p>I think that because I wrote <a href="https://en.wikipedia.org/wiki/Perl">perl</a> for a long time, I also tend
to write fairly terse code. This is not something to be proud of.</p>
<p>The other day, I was writing a small shell script to generate a menu of
links in <a href="https://en.wikipedia.org/wiki/Markdown">markdown</a> from a simple specially written text file.</p>
<h2 id="Input+Data" name="Input+Data">Input Data</h2>
<p>See input:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2024/optimiz/input.txt"></script>
<h2 id="Desired+Output" name="Desired+Output">Desired Output</h2>
<p>See output:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2024/optimiz/output.md"></script>
<h2 id="Approach" name="Approach">Approach</h2>
<p>I initially wrote it on my PC, with a comparatively fast CPU. The response
time on my PC was about one second. When I transferred the script
to my Web server (with a small CPU), the response time was 15 seconds to
generate the menu. This was surprising to me, as I did not expect the
performance difference between my PC and my Web server to be so massive.</p>
<p>I figured this needed to be optimized. The simplest way to do that is to
add caching, which I did very quickly. In reality, I needed to improve the code.</p>
<p>The first thing you need to do when optimizing code is to measure the effects of
changes on the code. In order to do that, I created a simple test harness:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2024/optimiz/th.cgi"></script>
<p>The options to the harness are:</p>
<ul>
<li><code>-1</code> or <code>--no-count</code> : run the code once. Essentially to check that the output is correct.</li>
<li><code>--count=int</code> : Will run the code the specified number of times</li>
<li><code>1</code> or <code>2</code> : Run implementation 1 or implementation 2.</li>
</ul>
<p>Run this with the <code>time</code> built-in to measure running times.</p>
<p>So running:</p>
<pre><code class="language-bash">sh th.cgi -1 1</code></pre>
<p>or</p>
<pre><code class="language-bash">sh th.cgi -1 2</code></pre>
<p>Can be used to make sure that implementation <strong>1</strong> and <strong>2</strong> are correct.</p>
<p>Then, you can run:</p>
<pre><code class="language-bash">time sh th.cgi --count=100 1</code></pre>
<p>or</p>
<pre><code class="language-bash">time sh th.cgi --count=100 2</code></pre>
<p>This will show how fast or slow the code performs. For testing, I would also
test with <code>busybox sh</code>, as that is the <code>shell</code> I have on my web server. On
my PC, <code>sh</code> is <a href="https://en.wikipedia.org/wiki/Bash_(Unix_shell)">bash</a>.</p>
<p>So the general approach is to measure the original performance, then make some
changes and measure if the performance improves (or not).</p>
<h2 id="Original+Script" name="Original+Script">Original Script</h2>
<p>The original un-optimized code is:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2024/optimiz/th1.cgi"></script>
<p>Running this on my PC takes almost 10 minutes.</p>
<h2 id="Optimized+Script" name="Optimized+Script">Optimized Script</h2>
<p>The optimized version is:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2024/optimiz/th2.cgi"></script>
<p>This version executes in 16 seconds.</p>
<p>So overall it is a pretty good performance enhancement. The approach I took was first to isolate
which part of the code executes the slowest. This script essentially runs in
a loop with two halves: the first half scans lines, the second half outputs
markdown.</p>
<p>By commenting out the different parts of the code, I was able to determine that most of the
time was spent in the scanning half.</p>
<p>The next step was to review the code and re-factor it to faster versions:</p>
<h3 id="Moving+invariant+code+out+of+loops" name="Moving+invariant+code+out+of+loops">Moving invariant code out of loops</h3>
<p>Move invariant code outside the loop. Assigning <code>elem2o</code> and <code>elem2c</code> was
originally done inside the loop and was moved out of it. This did not yield
much improvement, but in general it is always a good optimization.</p>
<h3 id="Replacing+if%2Fthen%2Felse+with+case" name="Replacing+if%2Fthen%2Felse+with+case">Replacing if/then/else with case</h3>
<p>I was doing:</p>
<pre><code class="language-bash">if (echo "$ARGS" | grep -q '>') ; then
...
else
...
fi</code></pre>
<p>This actually forks and execs two additional commands and a subshell. This was replaced with:</p>
<pre><code class="language-bash">case "$ARGS" in
*\>*)
...
;;
*)
...
;;
esac</code></pre>
<p>Since this runs entirely inside the shell, this removes spawning commands and forking subshells.</p>
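<p>A minimal sketch of the difference (the <code>ARGS</code> value is made up for illustration; both tests must agree):</p>
<pre><code class="language-bash">ARGS="Title > page.html"
# fork-heavy version: a subshell plus echo and grep
if (echo "$ARGS" | grep -q '>') ; then r1=yes ; else r1=no ; fi
# pure-shell version: no forks at all
case "$ARGS" in
*\>*) r2=yes ;;
*) r2=no ;;
esac
echo "grep=$r1 case=$r2"</code></pre>
<p>Run in a tight loop, the <code>case</code> form wins because the shell never has to leave its own process.</p>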
<h3 id="Using+IFS+for+parsing" name="Using+IFS+for+parsing">Using IFS for parsing</h3>
<p>I replaced:</p>
<pre><code class="language-bash">local i=1
while [ -n "$(echo "$ARGS" | cut -d'>' -f$i-)" ] ; do
set - "$@" "$(echo "$ARGS" | cut -d'>' -f$i | xargs)"
i=$(expr $i + 1)
done</code></pre>
<p>This is a terrible way to parse a line. It was replaced by adding <code>-e 's/[ ]*>[ ]*/>/g'</code> to the
<code>sed</code> command at the top, in front of the loop, and then:</p>
<pre><code class="language-bash">IFS=">" ; set - $ARGS; IFS="$oIFS"</code></pre>
<p>The performance difference here is <em>massive</em>.</p>
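<p>A self-contained sketch of the <code>IFS</code> trick (the field values are made up for illustration):</p>
<pre><code class="language-bash">ARGS="Home>index.html>extra"
oIFS="$IFS"
# split on '>' into the positional parameters, then restore IFS;
# no external commands are involved
IFS=">" ; set - $ARGS ; IFS="$oIFS"
echo "$# fields: $1, $2, $3"</code></pre>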
<h2 id="Conclusion" name="Conclusion">Conclusion</h2>
<p>So there you have it. It turns out that my <code>scripting</code> skills are poor. On the other
hand, it could be that I am so used to writing shell scripts that I am not using the best
tools for the job. In this particular example, I shouldn't have started with a shell
script but used something more suitable for text manipulation, such as <a href="https://en.wikipedia.org/wiki/Perl">perl</a> or
<a href="https://en.wikipedia.org/wiki/AWK">awk</a>. However, most scripting languages come with useful string-processing
capabilities and would have done the job well.</p>
Happy New Year 2024
urn:uuid:37d41fbf-2a6c-997e-cbc8-529ef895dff3
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><img src="/images/2024/newyear-2024.png" alt="NewYear2024" /></p>
<p>Best wishes for 2024!</p>
IPv6 on 2023
urn:uuid:e5620601-9263-1c06-e2f8-eb77cfed3187
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a sequel to my article <a href="/posts/2021/2021-12-21-ipv6-blues.html">IPv6 blues</a>.</p>
<hr />
<div id="toc"><ul>
<li><a href="#Layout">Layout</a></li>
<li><a href="#Enabling+forwarding">Enabling forwarding</a></li>
<li><a href="#Configure+networking">Configure networking</a></li>
<li><a href="#Prefix+delegation">Prefix delegation</a></li>
<li><a href="#Router+advertisement">Router advertisement</a></li>
<li><a href="#Conclusion">Conclusion</a></li>
</ul></div>
<hr />
<p>At the time, it looked like only a /64 prefix was being allocated.
However, when I recently checked my ADSL modem router configuration I
noticed that the ADSL modem actually gets assigned a /48 prefix. This makes
the configuration <em>much</em> easier.</p>
<p>What is nice, is that for <strong>"reasonable"</strong> configurations, IPv6 usually does the
right thing and configures most things by itself. For IPv6 routing, you only
need to enable the right functionality and most of the addressing is determined
automatically.</p>
<p>This article expands on this
<a href="https://wiki.alpinelinux.org/wiki/Linux_Router_with_VPN_on_a_Raspberry_Pi_(IPv6)">Alpine Linux Wiki article</a>.</p>
<h2 id="Layout" name="Layout">Layout</h2>
<p>This is a very simple configuration.</p>
<div><svg class="bob" font-family="arial" font-size="14" height="64" width="432" xmlns="http://www.w3.org/2000/svg">
<defs>
<marker id="triangle" markerHeight="10" markerUnits="strokeWidth" markerWidth="10" orient="auto" refX="15" refY="10" viewBox="0 0 50 20">
<path d="M 0 0 L 30 10 L 0 20 z"/>
</marker>
</defs>
<style>
line, path {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle.solid {
fill:black;
}
circle.open {
fill:transparent;
}
tspan.head{
fill: none;
stroke: none;
}
</style>
<path d=" M 4 24 L 8 24 M 4 24 L 4 32 M 8 24 L 16 24 M 8 24 L 16 24 L 24 24 M 16 24 L 24 24 L 32 24 M 24 24 L 32 24 L 40 24 M 32 24 L 40 24 L 48 24 M 40 24 L 48 24 L 56 24 M 48 24 L 56 24 L 64 24 M 56 24 L 64 24 L 72 24 M 64 24 L 72 24 L 80 24 M 72 24 L 80 24 M 84 24 L 80 24 M 84 24 L 84 32 M 188 24 L 192 24 M 188 24 L 188 32 M 192 24 L 200 24 M 192 24 L 200 24 L 208 24 M 200 24 L 208 24 L 216 24 M 208 24 L 216 24 L 224 24 M 216 24 L 224 24 L 232 24 M 224 24 L 232 24 L 240 24 M 232 24 L 240 24 M 244 24 L 240 24 M 244 24 L 244 32 M 324 24 L 328 24 M 324 24 L 324 32 M 328 24 L 336 24 M 328 24 L 336 24 L 344 24 M 336 24 L 344 24 L 352 24 M 344 24 L 352 24 L 360 24 M 352 24 L 360 24 L 368 24 M 360 24 L 368 24 L 376 24 M 368 24 L 376 24 L 384 24 M 376 24 L 384 24 L 392 24 M 384 24 L 392 24 L 400 24 M 392 24 L 400 24 L 408 24 M 400 24 L 408 24 L 416 24 M 408 24 L 416 24 L 424 24 M 416 24 L 424 24 M 428 24 L 424 24 M 428 24 L 428 32 M 4 32 L 4 48 M 4 32 L 4 48 M 84 32 L 84 48 M 84 32 L 84 48 M 96 40 L 84 40 M 96 40 L 104 40 M 96 40 L 104 40 L 112 40 M 104 40 L 112 40 L 120 40 M 112 40 L 120 40 L 128 40 M 120 40 L 128 40 L 136 40 M 128 40 L 136 40 L 144 40 M 136 40 L 144 40 L 152 40 M 144 40 L 152 40 L 160 40 M 152 40 L 160 40 L 168 40 M 160 40 L 168 40 L 188 40 M 168 40 L 188 40 M 176 40 L 188 40 M 188 32 L 188 48 M 188 32 L 188 48 M 244 32 L 244 48 M 244 32 L 244 48 M 256 40 L 244 40 M 256 40 L 264 40 M 256 40 L 264 40 L 272 40 M 264 40 L 272 40 L 280 40 M 272 40 L 280 40 L 288 40 M 280 40 L 288 40 L 296 40 M 288 40 L 296 40 L 304 40 M 296 40 L 304 40 L 324 40 M 304 40 L 324 40 M 312 40 L 324 40 M 324 32 L 324 48 M 324 32 L 324 48 M 428 32 L 428 48 M 428 32 L 428 48 M 4 56 L 4 48 M 4 56 L 8 56 L 16 56 M 8 56 L 16 56 L 24 56 M 16 56 L 24 56 L 32 56 M 24 56 L 32 56 L 40 56 M 32 56 L 40 56 L 48 56 M 40 56 L 48 56 L 56 56 M 48 56 L 56 56 L 64 56 M 56 56 L 64 56 L 72 56 M 64 56 L 72 56 L 80 56 M 72 56 L 80 56 M 84 56 L 84 48 M 84 56 L 80 56 M 188 56 L 188 48 M 188 56 L 192 56 
L 200 56 M 192 56 L 200 56 L 208 56 M 200 56 L 208 56 L 216 56 M 208 56 L 216 56 L 224 56 M 216 56 L 224 56 L 232 56 M 224 56 L 232 56 L 240 56 M 232 56 L 240 56 M 244 56 L 244 48 M 244 56 L 240 56 M 324 56 L 324 48 M 324 56 L 328 56 L 336 56 M 328 56 L 336 56 L 344 56 M 336 56 L 344 56 L 352 56 M 344 56 L 352 56 L 360 56 M 352 56 L 360 56 L 368 56 M 360 56 L 368 56 L 376 56 M 368 56 L 376 56 L 384 56 M 376 56 L 384 56 L 392 56 M 384 56 L 392 56 L 400 56 M 392 56 L 400 56 L 408 56 M 400 56 L 408 56 L 416 56 M 408 56 L 416 56 L 424 56 M 416 56 L 424 56 M 428 56 L 428 48 M 428 56 L 424 56" fill="none"/>
<path d="" fill="none" stroke-dasharray="3 3"/>
<text x="145" y="28">
eth0
</text>
<text x="9" y="44">
KPN
</text>
<text x="41" y="44">
modem
</text>
<text x="193" y="44">
router
</text>
<text x="329" y="44">
HOME
</text>
<text x="369" y="44">
NETWORK
</text>
<text x="257" y="60">
eth1
</text>
</svg>
</div>
<p>The current Alpine Linux release (v3.17.3) ships a kernel with IPv6 enabled by default, so nothing
special needs to be done for that.</p>
<h2 id="Enabling+forwarding" name="Enabling+forwarding">Enabling forwarding</h2>
<p>By default, IP (v4 or v6) forwarding is disabled on a Linux kernel. To enable it, you
need to modify <code>sysctl.conf</code>. Create a file:</p>
<p><code>/etc/sysctl.d/router.conf</code></p>
<pre><code># Controls IP packet forwarding
net.ipv4.ip_forward = 1
# http://vk5tu.livejournal.com/37206.html
# What's this special value "2"? Originally the value was "1", but this
# disabled autoconfiguration on all interfaces. That is, you couldn't appear
# to be a router on some interfaces and appear to be a host on other
# interfaces. But that's exactly the mental model of a ADSL router.
# Controls IP packet forwarding
net.ipv6.conf.all.forwarding = 2
net.ipv6.conf.default.forwarding = 2
# Accept Router Advertisments
net.ipv6.conf.all.accept_ra = 2
net.ipv6.conf.default.accept_ra = 2
# We are a router so disable temporary addresses
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
</code></pre>
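<p>These settings take effect at the next boot. To apply them immediately without rebooting, you can load the file by hand (assuming a <code>procps</code>-style <code>sysctl</code> binary):</p>
<pre><code>sysctl -p /etc/sysctl.d/router.conf</code></pre>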
<h2 id="Configure+networking" name="Configure+networking">Configure networking</h2>
<p>Configure IPv6 address for <code>eth1</code>. We don't need to configure <code>eth0</code> as that will be
done by <code>dhcpcd</code> through the ISP's router advertisements:</p>
<pre><code># Connect to ISP
auto eth0
iface eth0 inet static
address 192.168.2.250
netmask 255.255.255.0
broadcast 192.168.2.255
# Connected to local LAN
auto eth1
iface eth1 inet static
address 192.168.3.1
netmask 255.255.255.0
broadcast 192.168.3.255
iface eth1 inet6 static
address fde4:8dba:82e1:fff4::1
netmask 64
autoconf 0
accept_ra 0
privext 0</code></pre>
<h2 id="Prefix+delegation" name="Prefix+delegation">Prefix delegation</h2>
<p>The next step will be to configure DHCPv6 Prefix Delegation with your ISP.
Install <code>dhcpcd</code>.</p>
<pre><code>apk add dhcpcd
</code></pre>
<p>Configure it:</p>
<p><code>/etc/dhcpcd.conf</code></p>
<pre><code>
# Enable extra debugging
#debug
#logfile /var/log/dhcpcd.log
# Allow users of this group to interact with dhcpcd via the control
# socket.
#controlgroup wheel
# Inform the DHCP server of our hostname for DDNS.
hostname gateway
# Use the hardware address of the interface for the Client ID.
#clientid
# or
# Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as
# per RFC4361. Some non-RFC compliant DHCP servers do not reply with
# this set. In this case, comment out duid and enable clientid above.
duid
# Persist interface configuration when dhcpcd exits.
persistent
# Rapid commit support.
# Safe to enable by default because it requires the equivalent option
# set on the server to actually work.
option rapid_commit
# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
# Most distributions have NTP support.
option ntp_servers
# Respect the network MTU.
# Some interface drivers reset when changing the MTU so disabled by
# default.
#option interface_mtu
# A ServerID is required by RFC2131.
require dhcp_server_identifier
# Generate Stable Private IPv6 Addresses instead of hardware based
# ones
slaac private
# A hook script is provided to lookup the hostname if not set by the
# DHCP server, but it should not be run by default.
nohook lookup-hostname
# IPv6 Only
ipv6only
# Disable solicitations on all interfaces
noipv6rs
# Wait for IP before forking to background
waitip 6
# Don't touch DNS
nohook resolv.conf
# Use the interface connected to WAN
interface eth0
ipv6rs # enable routing solicitation get the default IPv6 route
iaid 1
ia_pd 1/::/56 eth1/2/64</code></pre>
<p>Add dhcpcd to the default run level:</p>
<pre><code>rc-update add dhcpcd default</code></pre>
<h2 id="Router+advertisement" name="Router+advertisement">Router advertisement</h2>
<p>Now we need to configure <code>radvd</code> to give router advertisements to our internal
network for addressing and routing.</p>
<pre><code>apk add radvd</code></pre>
<p>Once <code>radvd</code> is installed, you may configure it:</p>
<p><code>/etc/radvd.conf</code></p>
<pre><code>
interface eth0 {
# We are sending advertisements (route)
AdvSendAdvert on;
# When set, host use the administered (stateful) protocol
# for address autoconfiguration. The use of this flag is
# described in RFC 4862
AdvManagedFlag on;
# When set, host use the administered (stateful) protocol
# for address autoconfiguration. For other (non-address)
# information.
# The use of this flag is described in RFC 4862
AdvOtherConfigFlag on;
# Suggested Maximum Transmission setting for using the
# Hurricane Electric Tunnel Broker.
# AdvLinkMTU 1480;
# We have native Dual Stack IPv6 so we can use the regular MTU
# http://blogs.cisco.com/enterprise/ipv6-mtu-gotchas-and-other-icmp-issues
AdvLinkMTU 1500;
prefix ::/64 {
AdvOnLink on;
AdvAutonomous on; ## SLAAC based on EUI
AdvRouterAddr on;
};
};
interface eth1 {
AdvSendAdvert on;
AdvManagedFlag on;
AdvOtherConfigFlag on;
AdvLinkMTU 1500;
# Helps the route not get lost when on WiFi with packet loss
MaxRtrAdvInterval 30;
AdvDefaultLifetime 9000;
prefix fde4:8dba:82e1:fff3::/64 {
AdvOnLink on;
AdvAutonomous on; ## SLAAC based on EUI
};
};</code></pre>
<p>Add <code>radvd</code> to the default run level:</p>
<pre><code>rc-update add radvd default</code></pre>
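<p>Before adding more pieces, it is worth checking that the basics work. A few commands I would use to verify the setup (a sketch; interface names are as in the diagram above):</p>
<pre><code># On the router: check the delegated prefix and the default route
ip -6 addr show dev eth1
ip -6 route show default
# On a LAN client: check the SLAAC address and connectivity
ip -6 addr show
ping -6 -c 3 ipv6.google.com</code></pre>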
<h2 id="Conclusion" name="Conclusion">Conclusion</h2>
<p>At this point you should have a working IPv6 set-up. Things that you may want to add:</p>
<ul>
<li>Firewall rules</li>
<li>Additional static routes and subnets</li>
<li>DHCP daemon configuration to have more control on IP address assignments</li>
<li>OpenVPN</li>
</ul>
libnss-db HOWTO
urn:uuid:f41cfd34-2ed5-5533-5d25-089b2e27e4a4
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This mini-howto illustrates how to use <code>libnss-db</code> on an <a href="https://en.wikipedia.org/wiki/Ubuntu">Ubuntu</a>
Linux system.</p>
<p>Other installations should work too after adjusting package names and directory paths.</p>
<p>I myself use it as a "serverless" lightweight user directory. Essentially, I mount the
db directory and the home directory from an NFS server.</p>
<h2 id="Package+installation" name="Package+installation">Package installation</h2>
<p>Install the following packages:</p>
<pre><code class="language-bash">apt install -y libnss-db make
</code></pre>
<p>This creates a directory <code>/var/lib/misc</code> in <a href="https://en.wikipedia.org/wiki/Ubuntu">Ubuntu</a>. On other distributions
this may be in <code>/var/db</code>.</p>
<h2 id="Preparing+nss-db+data+directory" name="Preparing+nss-db+data+directory">Preparing nss-db data directory</h2>
<p>I like to keep a separate set of users in the <code>db</code> files. For that, create a
directory <code>/var/lib/misc/etc</code>. This will contain the additional users and groups.</p>
<pre><code class="language-bash">for f in passwd group shadow
do
cp -av /etc/$f /var/lib/misc/etc/$f
> /var/lib/misc/etc/$f
done</code></pre>
<p>This creates empty <code>passwd</code>, <code>group</code>, and <code>shadow</code> files (copying first
preserves the ownership and permissions of the originals). <strong>NOTE:</strong> <code>gshadow</code>
is unsupported. This is only relevant if you are using passwords to
control user group changes.</p>
<p>Because in this scenario we are using both flat files (i.e. <code>/etc/passwd</code>) and
db files (<code>/var/lib/misc/passwd.db</code>), we want to avoid uid/gid overlaps.</p>
<p>Copy <code>/etc/login.defs</code> to <code>/var/lib/misc/etc/login.defs</code> and
change UID_MIN,UID_MAX,GID_MIN,GID_MAX to a different space (so as not
to overlap with the flatfiles spaces).</p>
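As a sketch (the paths and ranges here are only illustrative), the range change can be scripted with <code>sed</code>; try it against a scratch copy first:

<pre><code class="language-bash"># Demo against a scratch copy (values are illustrative)
mkdir -p /tmp/nssdb-demo
printf '%s\n' 'UID_MIN 1000' 'UID_MAX 60000' \
              'GID_MIN 1000' 'GID_MAX 60000' > /tmp/nssdb-demo/login.defs
# Move the db users into their own, non-overlapping ranges
sed -i -e 's/^UID_MIN.*/UID_MIN 12000/' -e 's/^UID_MAX.*/UID_MAX 12999/' \
       -e 's/^GID_MIN.*/GID_MIN 13000/' -e 's/^GID_MAX.*/GID_MAX 13999/' \
       /tmp/nssdb-demo/login.defs</code></pre>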
<p>When users are created with the <code>useradd</code> command, you can pass the
<code>--prefix /var/lib/misc</code> argument, so then it would create users in
the <code>/var/lib/misc/etc</code> directory (ignoring <code>/etc</code>) and would get
defaults from <code>/var/lib/misc/etc/login.defs</code> (instead of <code>/etc/login.defs</code>).</p>
<p>For <code>/home</code> directories to be created properly I create the symlink:</p>
<pre><code class="language-bash">ln -s /home /var/lib/misc/home</code></pre>
<p>I also like to add <code>sudoers</code> configuration to the <code>/var/lib/misc</code> directory (so
it can be shared via NFS)</p>
<pre><code class="language-bash">cp -av /etc/sudoers.d /var/lib/misc</code></pre>
<h2 id="Moving+nss-db+data" name="Moving+nss-db+data">Moving nss-db data</h2>
<p>If you are storing nss-db in a different location, you can use a <code>mount --bind</code>
to make it available in <code>/var/lib/misc</code>. This can be configured on <code>/etc/fstab</code>
as follows:</p>
<pre><code class="language-bash"># /etc/fstab
/mount/point/dir /var/lib/misc none defaults,bind 0 0</code></pre>
<h2 id="Configuring+nss-db" name="Configuring+nss-db">Configuring nss-db</h2>
<p>In <code>/var/lib/misc</code> there is a <code>Makefile</code> that is used to create the relevant
<code>db</code> files. To control how this is done, you can configure settings in
<code>/etc/default/libnss-db</code>:</p>
<pre><code># /etc/default/libnss-db
# settings for libnss-db
# Directory where the databases are kept
VAR_DB = /var/lib/misc
# Location of files
ETC = $(VAR_DB)/etc
# Databases to generate
DBS = passwd group shadow
# Programs used
AWK = awk
MAKEDB = makedb --quiet</code></pre>
<p>You must also add the <code>db</code> setting to the <code>/etc/nsswitch.conf</code> lines
for <code>passwd</code>, <code>group</code> and <code>shadow</code>.</p>
<pre><code class="language-bash"># Configure nsswitch.conf
sed -i~ \
-e 's/^\(passwd:[ \t]*\).*$/\1files systemd db/' \
-e 's/^\(group:[ \t]*\).*$/\1files systemd db/' \
-e 's/^\(shadow:[ \t]*\).*$/\1files db/' \
/etc/nsswitch.conf</code></pre>
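<p>After running the above, the relevant <code>/etc/nsswitch.conf</code> lines should look like this (the <code>systemd</code> entry may or may not be present on your system):</p>
<pre><code>passwd:         files systemd db
group:          files systemd db
shadow:         files db</code></pre>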
<p>This is not strictly part of nss-db, but I like to add to <code>/etc/sudoers</code>
the line:</p>
<pre><code>@includedir /var/lib/misc/sudoers.d</code></pre>
<p>From then on, adding, modifying and removing users/groups should be done
on the files in <code>/var/lib/misc/etc</code>. Afterwards, use <code>make</code> in <code>/var/lib/misc</code>
to recreate the <code>db</code> files.</p>
<p>For convenience I created a script in <code>/usr/local/bin</code> named <code>nssdb</code>:</p>
<pre><code class="language-bash">#!/bin/sh
#
# NSSDB command
#
nssdb_dir=/var/lib/misc
nssdb_opts="--prefix $nssdb_dir"
extra_group_add_opts="--key GID_MIN=13000 --key GID_MAX=13999"

if [ $# -eq 0 ] ; then
  cat <<-_EOF_
Usage: $0 useradd|userdel|usermod|groupadd|groupdel|groupmod [options]
_EOF_
  exit 1
fi

case "$1" in
  useradd|userdel|usermod|groupdel|groupmod)
    op="$1" ; shift
    ;;
  groupadd)
    op="$1" ; shift
    nssdb_opts="$nssdb_opts $extra_group_add_opts"
    ;;
  *)
    echo "$1: Unknown sub-command" ; exit 1
    ;;
esac

"$op" $nssdb_opts "$@" && ( cd $nssdb_dir && make )</code></pre>
<p>With this script you can run:</p>
<ul>
<li><code>nssdb useradd</code></li>
<li><code>nssdb usermod</code></li>
<li><code>nssdb userdel</code></li>
<li><code>nssdb groupadd</code></li>
<li><code>nssdb groupmod</code></li>
<li><code>nssdb groupdel</code></li>
</ul>
<p>This runs the specified command with the <code>--prefix</code> option, so that it
modifies the files under <code>/var/lib/misc</code> and then invokes the <code>Makefile</code> accordingly.</p>
<p>Note that the following commands cannot be supported:</p>
<ul>
<li><code>chfn</code>: does not support <code>--prefix</code> or <code>--root</code> options.</li>
<li><code>chsh</code>, <code>passwd</code>, <code>newusers</code> : only support <code>--root</code>, which requires <code>root</code>
privileges.</li>
</ul>
Alpine Linux Custom Interface names
urn:uuid:843f0670-7abe-b346-ec3f-3a1bfd9deccc
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This article is a copy of <a href="https://wiki.alpinelinux.org/wiki/Custom_network_interface_names">this article</a>
and shows how to rename/change name of a network interface.</p>
<p>Alpine Linux uses <code>busybox</code> <code>mdev</code> to manage devices in <code>/dev</code>. <code>mdev</code> reads <code>/etc/mdev.conf</code>
and according to <a href="https://git.busybox.net/busybox/plain/docs/mdev.txt">mdev documentation</a> one
can define a command to be executed per device definition.
The command which is going to be used to change network interface name is <code>nameif</code>.</p>
<h3 id="%2Fetc%2Fmdev.conf+configuration" name="%2Fetc%2Fmdev.conf+configuration"><code>/etc/mdev.conf</code> configuration</h3>
<pre><code>-SUBSYSTEM=net;DEVPATH=.*/net/.*;.* root:root 600 @/sbin/nameif -s</code></pre>
<p>Here we tell <code>mdev</code> to call <code>nameif</code> for devices found in <code>/sys/class/net/</code>.</p>
<pre><code># ls -d -C -1 /sys/class/net/eth*
/sys/class/net/eth1
/sys/class/net/eth2
/sys/class/net/eth3
/sys/class/net/eth4
/sys/class/net/eth5</code></pre>
<h3 id="nameif+configuration" name="nameif+configuration"><code>nameif</code> configuration</h3>
<p><code>nameif</code> itself reads <code>/etc/mactab</code> by default. An example line for a network interface with
the following hwaddr</p>
<pre><code># cat /sys/class/net/eth0/address
90:e2:ba:04:28:c0</code></pre>
<p>would be</p>
<pre><code># grep 90:e2:ba:04:28:c0 /etc/mactab
dmz0 90:e2:ba:04:28:c0</code></pre>
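If several interfaces need renaming, the whole map can be written in one go (the names and MAC addresses below are made up):

<pre><code class="language-bash"># One "name hwaddr" pair per line
cat > /tmp/mactab <<'EOF'
dmz0 90:e2:ba:04:28:c0
lan0 90:e2:ba:04:28:c1
EOF
# Review it, then install it as /etc/mactab; for testing, nameif can
# also be pointed at an alternate file: nameif -c /tmp/mactab -s</code></pre>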
<h3 id="finalization" name="finalization">finalization</h3>
<p>To use renamed network interface without reboot, just call <code>nameif</code> while the network
interface is down.</p>
<pre><code># nameif -s</code></pre>
<p>And finally reboot...</p>
Alpine boot menu
urn:uuid:dbee9804-7406-df13-2f32-c87c26842068
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This article is an update to my
<a href="/posts/2020/2020-10-04-alpine-boot-switcher.html">Alpine Boot Switcher</a> article.</p>
<hr />
<p>Contents:</p>
<div id="toc"><ul>
<li><a href="#Preparing+boot+device">Preparing boot device</a></li>
<li><a href="#Booting+the+system">Booting the system</a></li>
<li><a href="#Adding+a+new+kernel">Adding a new kernel</a></li>
<li><a href="#After+a+succesful+reboot">After a successful reboot</a></li>
<li><a href="#Alternative+workflow">Alternative workflow</a></li>
<li><a href="#Removing+installed+kernels">Removing installed kernels</a></li>
<li><a href="#Downloads">Downloads</a></li>
</ul></div>
<hr />
<p>The weakness of that approach was that you needed to be on a running system
to select the active kernel. So, if you switched to a broken kernel then
you wouldn't be able to revert back without having to get the media device out
and modifying the file system to switch to a different kernel.</p>
<p>The approach described here makes use of the syslinux or grub menu system to let you
select a new kernel. It requires modifying the <code>init</code> script in the
<code>initramfs</code> image so that the right APK boot repository can be selected explicitly;
otherwise the first one found in the boot file system is used.</p>
<p>This solution is suitable for physical and virtualized systems using BIOS and UEFI
boot methods.</p>
<p>The high level approach is:</p>
<ul>
<li>prepare a bootable USB or virtual disk image.</li>
<li>during system operation, use the <code>inst_iso</code> script to install a new
Alpine release.</li>
<li>You can use the <code>mkmenu</code> script to select the new kernel or ...</li>
<li>Re-boot the system; syslinux (on BIOS systems) or grub (on UEFI systems)
will show a menu to let you select the boot kernel, or boot a default
kernel after 10 seconds (unless configured with a different time-out).</li>
<li>If the boot was successful, you can use <code>mkmenu</code> to make the current
kernel the default kernel.</li>
<li>If the boot fails, then you can hard boot the system and use the boot
menu to select a known working kernel.</li>
</ul>
<h2 id="Preparing+boot+device" name="Preparing+boot+device">Preparing boot device</h2>
<p>Use the <code>mkuusb.sh</code> script to create a bootable USB drive or a bootable disk image.</p>
<p>The USB drive can be used
on a physical system and supports UEFI and BIOS boot methods (BIOS boot is untested). </p>
<p>The bootable disk image is meant to be used in a virtualized environment. I have tested
it with <code>virsh</code> on KVM using the BIOS boot method. It has UEFI boot support files, but
I have not tested this.</p>
<p>Usage:</p>
<pre><code> ./mkuusb.sh [options] isofile [usbdev]</code></pre>
<p>Options:</p>
<ul>
<li><code>--serial</code> : Enable serial console</li>
<li><code>--ovl=ovlfile</code> : overlay file to use</li>
<li><code>--boot-label=label</code> : boot partition label <br />Defaults to a random label unless an ovl is specified.
In that case, it will take the label from the filesystem mounted as
<code>/media/boot</code> from <code>/etc/fstab</code>.</li>
<li><code>--boot-size=size</code> : boot partition size <br />If not specified, it will default to the entire drive,
or up to half the drive (if a data partition is enabled),
up to a maximum of 8GB.</li>
<li><code>--data</code> : create a data partition</li>
<li><code>--data-label=label</code> : label for the data partition <br />Defaults to a random label unless an ovl is specified;
in that case it will take the label from the filesystem mounted as
<code>/media/data</code> from <code>/etc/fstab</code>.</li>
<li><code>--data-size=size</code> : data partition size <br />size defaults to the remainder of the disk</li>
<li><code>isofile</code>: ISO file to use as the base alpine install</li>
<li><code>usbdev</code> : <code>/dev/path</code> to the thumb drive that will be installed. <br />It will try to use a suitable default by looking for an unused drive among
the currently connected drives on the system. Otherwise a target
image file can be specified using:
<ul>
<li><code>img:path/to/image/file[,size]</code></li>
</ul></li>
</ul>
<p>If <code>--serial</code> was used, the boot menu will be configured to use the first
serial port (usually <code>COM1</code>) at a speed of 115200. So you can use
a serial console or the normal display to select the desired kernel.</p>
<p>When configured with <code>--serial</code>, kernel boot messages will also be displayed on
the serial console.</p>
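<p>On BIOS systems this corresponds to a <code>syslinux.cfg</code> directive along these lines (an illustration; <code>mkuusb.sh</code> writes the actual configuration):</p>
<pre><code>SERIAL 0 115200</code></pre>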
<h2 id="Booting+the+system" name="Booting+the+system">Booting the system</h2>
<p>After creating the boot device, you can connect it to a physical server or,
in the case of a bootable image, add it to a virtual machine configuration.</p>
<p>Booting the system will show a boot menu letting you select a kernel. If
nothing is selected after the time-out period, a configured default
will be used.</p>
<h2 id="Adding+a+new+kernel" name="Adding+a+new+kernel">Adding a new kernel</h2>
<p>During normal operation, you may want to upgrade to a new kernel. To do this
you can use the script:</p>
<ul>
<li><code>$bootmedia/scripts/inst_iso.sh</code> <em>file-or-url</em></li>
</ul>
<p>You can pass an ISO file name or the URL of an ISO on the network. If given
a URL, the script will download the image first.</p>
<p>This will prepare the environment to include the new kernel and add a menu
entry to boot the new kernel. The default kernel remains as the currently
running kernel.</p>
<p>At this point, the system can be rebooted. You can then use the
boot menu to select the latest kernel manually.</p>
<p>If the new kernel fails to boot successfully, then you may need to hard boot
the system.</p>
<p>Since the default kernel was not changed, it should boot to the last working
kernel and proceed normally.</p>
<h2 id="After+a+succesful+reboot" name="After+a+succesful+reboot">After a successful reboot</h2>
<p>If the new kernel was successfully booted, you can use the command:</p>
<ul>
<li><code>$bootmedia/mkmenu.sh</code></li>
</ul>
<p>to make the currently running kernel the default.</p>
<h2 id="Alternative+workflow" name="Alternative+workflow">Alternative workflow</h2>
<p>Alternatively, after the new kernel is installed, you can use the command:</p>
<ul>
<li><code>$bootmedia/mkmenu.sh --latest</code></li>
</ul>
<p>to make the new kernel the default one. You can then re-boot the system
and if everything goes fine, you are done.</p>
<p>If the system fails to boot then you probably will need to do a hard boot of
the system and use the boot menu to select a working Alpine Linux kernel.</p>
<p>After a successful boot, you can use the command:</p>
<ul>
<li><code>$bootmedia/mkmenu.sh</code></li>
</ul>
<p>to make the currently running kernel the default.</p>
<h2 id="Removing+installed+kernels" name="Removing+installed+kernels">Removing installed kernels</h2>
<p>Alpine Linux boot images are installed in their own folders. If you want
to remove a kernel, just remove the folder that you want to un-install and
run <code>mkmenu.sh</code> again to update the boot menu.</p>
<h2 id="Downloads" name="Downloads">Downloads</h2>
<p>The code can be found in my repository:</p>
<ul>
<li><a href="https://github.com/TortugaLabs/mab/tree/main/uub">github</a></li>
</ul>
<p>Alpine Linux boot images:</p>
<ul>
<li><a href="https://alpinelinux.org/downloads/">Alpine Linux ISO downloads</a></li>
</ul>
Adding static routes in alpine linux
urn:uuid:2dcf48e8-03e1-f7ed-8e62-2bcb17484f8f
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>There are several ways to do this documented
in the alpine linux <a href="https://wiki.alpinelinux.org/wiki/How_to_configure_static_routes">wiki</a>.</p>
<p>My preferred way is to configure it in <code>/etc/network/interfaces</code>.</p>
<p>For example:</p>
<pre><code>auto eth0
iface eth0 inet static
    address 192.168.0.1
    netmask 255.255.255.0
    up ip route add 10.14.0.0/16 via 192.168.0.2
    up ip route add 192.168.100.0/23 via 192.168.0.3</code></pre>
<p>Note that you can actually add these lines in a <code>dhcp</code> stanza.</p>
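<p>For example, the same routes with a <code>dhcp</code> stanza would look like this:</p>
<pre><code>auto eth0
iface eth0 inet dhcp
    up ip route add 10.14.0.0/16 via 192.168.0.2
    up ip route add 192.168.100.0/23 via 192.168.0.3</code></pre>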
<p>The benefit of doing this is that those routes are added when that interface
is brought up.</p>
<p>Also, they are kicked off by the <code>networking</code> init.d file.</p>
New NacoWiki
urn:uuid:6c53c921-5601-d4ca-8175-73d94791ad6c
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Release new version (3.2.1) of <a href="https://github.com/iliu-net/NacoWiki">NacoWiki</a>.</p>
<p>The following changes are included:</p>
<ul>
<li>Added document properties</li>
<li>Added <code>opts.yaml</code></li>
<li>API improvements</li>
<li>Bug fixes and UI improvements</li>
<li>Additional Plugins:
<ul>
<li>Versions</li>
<li>AutoTag</li>
<li>Albatros : Blog site generator (similar to <a href="https://getpelican.com/">Pelican</a>).</li>
</ul></li>
</ul>
<p>Most of the changes are to support the new plugins.</p>
<h2 id="Versions" name="Versions">Versions</h2>
<p>Now, <a href="https://github.com/iliu-net/NacoWiki">NacoWiki</a> keeps track of previous versions of articles.
This can be disabled on a per-directory basis by creating an
<code>opts.yaml</code> and adding the <code>disable-props: true</code> entry.</p>
<h2 id="AutoTag" name="AutoTag">AutoTag</h2>
<p>When an article is saved, it will create tags automatically based on
a <code>tagcloud.md</code> file containing a list of <em>tagging</em> words. This
is enabled only if there is a <code>tagcloud.md</code> file in the sub-directory
or any parent sub-directory.</p>
<h2 id="Albatros" name="Albatros">Albatros</h2>
<p>This is a Blog site generator inspired by <a href="https://getpelican.com/">Pelican</a>. It was
mainly written to migrate this web site from <a href="https://getpelican.com/">Pelican</a> which
uses a Python based markdown implementation to the same markdown
implementation used in <a href="https://github.com/iliu-net/NacoWiki">NacoWiki</a>. The reason being that
I use <a href="https://github.com/iliu-net/NacoWiki">NacoWiki</a> to edit this Blog, so it makes sense to use
the same code to preview and generate the static web site.</p>
<p>It uses a slightly modified version of the template I was using
for this website, so the change should be transparent to most
people.</p>
Migrated to NacoWiki Albatros
urn:uuid:d93e2b36-16a3-47f0-50b6-1a16f1f6adff
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This edition marks the migration of my Blog from
<a href="https://getpelican.com/">Pelican</a> to <a href="https://github.com/iliu-net/nanowiki">NacoWiki</a> Albatros.</p>
<p>This is a mostly transparent change. I wrote Albatros specifically
to migrate this web site from <a href="https://getpelican.com/">Pelican</a> to a php based markdown
implementation. As such, it uses a slightly modified version
of the <a href="https://getpelican.com/">Pelican</a> theme.</p>
<p>The main changes from the previous site are:</p>
<ul>
<li>Re-arranged file structure</li>
<li>Built-in search functionality. This no longer depends on duck-duck-go.</li>
<li>Uses the same renderer as <a href="https://github.com/iliu-net/nanowiki">NacoWiki</a>.</li>
</ul>
File system encryption in Alpine Linux
urn:uuid:e63b4887-cd01-9846-0e64-009f94fb8012
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is similar to my previous article
<a href="/posts/2019/2019-02-28-encrypting-fs-in-void.html">Encrypting Filesystem in Void Linux</a>
but for Alpine Linux.</p>
<p>The point of this recipe is to create an encrypted file system
so that when the disc is disposed of, it does not need to be
securely erased. This is particularly important for SSD devices,
since, due to block remapping (for wear levelling), data can't
be overwritten reliably.</p>
<p>The idea is that the boot/root filesystem containing the encryption
keys is stored on a different device from the encrypted file system.</p>
<h2 id="Create+block+devices" name="Create+block+devices">Create block devices</h2>
<pre><code class="language-bash">apk add lvm2 cryptsetup
dd if=/dev/urandom of=/etc/crypto_keyfile.bin bs=1024 count=4
chmod 000 /etc/crypto_keyfile.bin
cryptsetup luksFormat /dev/xda2 /etc/crypto_keyfile.bin
cryptsetup luksOpen --key-file=/etc/crypto_keyfile.bin /dev/xda2 crypt-pool
</code></pre>
<p>Look up the UUID</p>
<pre><code class="language-bash">blkid /dev/xda2</code></pre>
<p>Edit <code>dmcrypt</code> service configuration in <code>/etc/conf.d/dmcrypt</code>:</p>
<pre><code class="language-bash">target=crypt-pool
source="UUID=xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
key=/etc/crypto_keyfile.bin
</code></pre>
<p>And enable it with this command:</p>
<pre><code>rc-update add dmcrypt boot
</code></pre>
<p>At this point it would be good to back up:</p>
<ul>
<li><code>/etc/conf.d/dmcrypt</code></li>
<li><code>/etc/crypto_keyfile.bin</code></li>
</ul>
<p>Reboot and make sure that the block device gets created on start-up.</p>
<p>Add to LVM:</p>
<pre><code class="language-bash">vgcreate pool /dev/mapper/crypt-pool
lvcreate --name home0 -L 20G pool
</code></pre>
<p>Create your file-system and add it to <code>/etc/fstab</code>.</p>
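<p>For example, after <code>mkfs.ext4 /dev/pool/home0</code> (ext4 is just an assumption; use whatever file-system you prefer), the <code>/etc/fstab</code> entry could look like:</p>
<pre><code># /etc/fstab
/dev/pool/home0  /home  ext4  defaults  0 2</code></pre>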
CSS Tips
urn:uuid:df3cf520-8b59-7460-4951-523b0b2e583e
2024-03-05T00:00:00+01:00
Alejandro Liu
<div id="toc"><ul>
<li><a href="#Style+editor">Style editor</a></li>
<li><a href="#Sending+query+string+with+PHP">Sending query string with PHP</a></li>
<li><a href="#Position+sticky">Position sticky</a></li>
<li><a href="#Flexbox+and+Grid">Flexbox and Grid</a></li>
</ul></div>
<hr />
<p>Personally, I dislike CSS. I find it difficult to use because of the lacking
documentation and the fact that different browsers implement things subtly
differently. Unfortunately, it is something that one has to use today
in order to create functioning web sites, though things are getting
better all the time.</p>
<p>Some things that I have learned over time:</p>
<h2 id="Style+editor" name="Style+editor">Style editor</h2>
<p>A feature of Firefox that lets me edit and try CSS styles on the fly. I find it very
convenient.</p>
<p>To access it:</p>
<ol>
<li>Open the Firefox menu</li>
<li>More tools</li>
<li>Developer tools</li>
<li>Click on the <code>Style Editor</code> tab.</li>
</ol>
<p>In addition, Firefox comes with a "Responsive Design" mode that lets you simulate the
form factor of different devices. Granted, you can do this by just resizing the window
but this is very convenient.</p>
<h2 id="Sending+query+string+with+PHP" name="Sending+query+string+with+PHP">Sending query string with PHP</h2>
<p>Normally, web browsers would cache JS and CSS files for performance. For normal
usage, this is the best approach. For development, this is a bit inconvenient
as you may be debugging problems that may have been fixed, but the web browser
is still using buggy versions of resource files.</p>
<p>To get around this, you can code your URLs to include a query string so that
you can change the query string every time you make changes to resources.</p>
<p>In PHP, what I do is add to the resource URL a query string containing the
modification time of the file. That way, when I update a resource file,
the query string changes automatically, so the cache expires and the browser renders the
latest version.</p>
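<p>A minimal sketch of this in PHP (the helper name and paths are made up):</p>
<pre><code class="language-php">&lt;?php
// Append the file's modification time as a query string so the URL
// changes whenever the file does, invalidating the cached copy.
function asset_url(string $path): string {
  return $path . '?v=' . filemtime(__DIR__ . '/' . $path);
}
?&gt;
&lt;link rel="stylesheet" href="&lt;?= asset_url('css/style.css') ?&gt;"&gt;</code></pre>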
<h2 id="Position+sticky" name="Position+sticky">Position sticky</h2>
<p>Very often you would like to have a header part of the page to be
always displayed. I create this effect with the following CSS:</p>
<pre><code class="language-css">css-selector {
  position: sticky;
  top: 0;
  z-index: 100;
}
</code></pre>
<p>The important setting is the <code>position: sticky</code> which makes that element stick
to the window. The <code>top: 0</code> positions the element to the top of the page.
Finally, the <code>z-index: 100</code> makes it so that it will be rendered above other elements
rather than being obscured by them.</p>
<h2 id="Flexbox+and+Grid" name="Flexbox+and+Grid">Flexbox and Grid</h2>
<p>These are layout managers that can be used for arranging the page. For
navigation bars, I prefer to use flexbox, whereas for table-like layouts, Grid is
usually a good option.</p>
<p>You should consider using grid layout when: </p>
<ul>
<li>You have a complex design to work with and want maintainable web pages</li>
<li>You want to add gaps between the block elements</li>
</ul>
<p>You should consider using flexbox when:</p>
<ul>
<li>You have a small design to work with, with only a few rows and columns</li>
<li>You need to align elements</li>
<li>You don’t know how your content will look on the page, and you want everything to fit in.</li>
</ul>
Local Startup
urn:uuid:b01d0c3c-8d43-5093-b618-ab59cd25b545
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a method to control the start-up of applications in a Linux Desktop session
that are run by a local default configuration, but can also be overridden by the user.</p>
<p>This is unlike the <code>/etc/xdg/autostart</code> which is mostly under the control of the
distro packager.</p>
<p>Also unlike the <code>/etc/X11/profile.d</code> directory, this runs inside the Desktop Session.
<code>/etc/X11/profile.d</code> gets started <em>before</em> the Desktop session is available.</p>
<p>There is a configuration <code>/etc/xdg/local-startup.cfg</code>, which contains the local
configuration. It is a text file:</p>
<pre><code># comments start with #
#
delay 3000 # Number of milliseconds to wait before starting applications
run-cmd /etc/xdg/local-startup.run # Run script
application1.desktop enable # Enable the application
application2.desktop disable # disable this application
</code></pre>
<p>Applications that can be auto started are defined either in <code>/usr/share/applications</code>
or in <code>$HOME/.local/share/applications</code> as <code>.desktop</code> files.</p>
<p>Adding an <em>enabled</em> application to the global config will start the application by default.</p>
<p>Adding a <em>disabled</em> application to the global config, will <em>not</em> start the application
by default, but it makes it possible for the user to override this setting
in their <code>$HOME/.config/local-startup.cfg</code> file.</p>
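<p>For example, a user who wants <code>application2.desktop</code> after all can override the global setting in their own file:</p>
<pre><code># $HOME/.config/local-startup.cfg
application2.desktop enable</code></pre>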
<p>Files:</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2023/local-startup/local-startup.tk">local-startup.tk</a>
: main implementation script</li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2023/local-startup/local.cfg">local.cfg</a> :
example configuration file</li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2023/local-startup/local-startup-autostart.desktop">local-startup-autostart.desktop</a> :
desktop file to run when session starts</li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2023/local-startup/local-startup-prefs.desktop">local-startup-prefs.desktop</a>
: desktop file for preferences editor</li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2023/local-startup/local-startup.run">local-startup.run</a> :
example run file.</li>
</ul>
<h2 id="local-startup.run" name="local-startup.run">local-startup.run</h2>
<p>This is an arbitrary script that can be run on boot-up after the desktop
environment is loaded. It is meant to run some scripts that can be used
to tweak the User Experience. Currently it does:</p>
<ul>
<li>Closes the pidgin window (that always opens regardless)</li>
<li>Monitors windows that have the size of a maximized window but are not
maximized. These windows still have window re-size grabbers which I
personally find annoying.</li>
</ul>
Pulse Audio control in python
urn:uuid:7e2ce5e5-ceeb-0ba8-5eb7-a62eb13d37c8
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I have been using a <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2020/pa-hints/patoggle">shell script</a> to toggle pulse audio sinks for some time. It worked well enough for
switching output among several profiles on a single audio card. I recently upgraded
my set-up to new hardware. This hardware, for some reason, reported the analog stereo output and
the digital HDMI output as separate sound cards, so my patoggle script no longer worked well
enough.</p>
<p>Since parsing the output of the <code>pacmd</code> in shell script was becoming a pain, I decided to re-write
the toggling script in <code>python</code>. The new script is <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2020/pa-hints/patoggle.py">here</a>.</p>
<p>This script depends on two packages:</p>
<ul>
<li><a href="https://pypi.org/project/pulsectl/">pulsectl</a> : used to control pulse audio</li>
<li><a href="https://docs.python.org/3/library/tkinter.html">tkinter</a> : used to show info on screen</li>
</ul>
<p>It can be used to control volume and switch audio output.</p>
Global hotkeys
urn:uuid:2046ad26-87b3-8de9-d2d9-f63006409844
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>To make it easier to switch desktop environments I am using
a Desktop Environment independent hot key configuration using
<a href="https://www.nongnu.org/xbindkeys/">xbindkeys</a>. This lets me use the same
keybindings on different Window managers and Desktop Environments.</p>
<p>This code can be found in <a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2023/global-hotkeys">github</a>.</p>
<p>Included are the following:</p>
<ul>
<li>hk_helper : bash and tcl/tk implementations. The latest
version is based on tcl/tk.</li>
<li>xbindkeysrc : hotkey configuration file</li>
<li>xbindkeys.desktop : <code>/etc/xdg/startup</code> file.</li>
<li><em>obsolete</em> profile.sh : <code>/etc/X11/profiles.d</code> file (not used).</li>
</ul>
<p>The script starts from <code>/etc/xdg/startup</code>, so as to make sure the
desktop environment works and can grab as many keys as possible.</p>
<p>Afterwards, if there is a <code>$HOME/.xbindkeysrc</code>, it will be loaded first.
This is to allow the user's own key bindings to work.</p>
<p>Finally it will use the global <code>xbindkeysrc</code> file.</p>
<p>Defined hotkeys can be seen in the <code>xbindkeysrc</code> config file here:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2023/global-hotkeys/xbindkeysrc"></script>
XPrinterMgr
urn:uuid:7910744e-e86c-46d6-e3e7-6ad01c6c11c6
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a small utility to manage local printer(s) in
a home office setting.</p>
<p>It will show the state of the printer:</p>
<ul>
<li>accepting or rejecting jobs</li>
<li>enabled or disabled printing</li>
</ul>
<p>And if <code>sudo</code> is configured correctly, it allows users to:</p>
<ul>
<li>enable or disable printing</li>
<li>accept or reject new print jobs</li>
<li>print a test page</li>
<li>Show device info. It would try to start <code>hp-toolbox</code> if relevant.</li>
</ul>
<p>It also lets you manage the local printer job queue, allowing you to:</p>
<ul>
<li>show what jobs are in the queue</li>
<li>cancel a job</li>
<li>cancel all jobs</li>
</ul>
<p>It is not intended to replace the CUPS WebUI. It is just a simple
GUI to handle the most common cases in a home office, allowing
end-users some simple control of printer management functions.</p>
<p>It makes use of:</p>
<ul>
<li><a href="https://github.com/OpenPrinting/pycups">pycups</a></li>
<li><a href="https://docs.python.org/3/library/tkinter.html">python tkinter</a></li>
<li><a href="https://github.com/gnikit/tkinter-tooltip">tkinter-tooltip</a></li>
<li><a href="https://icons8.com/">icons8</a></li>
</ul>
<p>The code is in <a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2023/xprtmgr">github</a>.</p>
rsync filter rules
urn:uuid:cf0e9fd3-2c3a-bdf9-a9e9-8c3905a188ee
2024-03-05T00:00:00+01:00
Alejandro Liu
<div id="toc"><ul>
<li><a href="#Rules">Rules</a></li>
<li><a href="#Simple+include%2Fexclude+rules">Simple include/exclude rules</a></li>
<li><a href="#Simple+include%2Fexclude+example">Simple include/exclude example</a></li>
<li><a href="#Filter+rules+when+deleting">Filter rules when deleting</a></li>
<li><a href="#Filter+rules+in+depth">Filter rules in depth</a></li>
<li><a href="#Pattern+matching+rules">Pattern matching rules</a></li>
<li><a href="#Filter+rule+modifiers">Filter rule modifiers</a></li>
<li><a href="#Merge-file+filter+rules">Merge-file filter rules</a></li>
<li><a href="#List-clearing+filter+rule">List-clearing filter rule</a></li>
<li><a href="#Anchoring+include%2Fexclude+patterns">Anchoring include/exclude patterns</a></li>
<li><a href="#Per-directory+rules+and+delete">Per-directory rules and delete</a></li>
</ul></div>
<p>The basic options for filtering files in rsync are:</p>
<ul>
<li><code>--exclude=PATTERN</code> <br />This option specifies an exclude rule.</li>
<li><code>--exclude-from=FILE</code>
This option is related to the <code>--exclude</code> option, but it
specifies a <em>FILE</em> that contains exclude patterns (one per
line). Blank lines in the file are ignored, as are whole-
line comments that start with ';' or '#' (filename rules
that contain those characters are unaffected). <br />If a line consists of just "!", then the current filter
rules are cleared before adding any further rules. <br />If FILE is '-', the list will be read from standard input.</li>
<li><code>--include=PATTERN</code> <br />This option specifies an include rule.</li>
<li><code>--include-from=FILE</code>
This option is related to the <code>--include</code> option, but it
specifies a <em>FILE</em> that contains include patterns (one per
line). Blank lines in the file are ignored, as are whole-
line comments that start with ';' or '#' (filename rules
that contain those characters are unaffected).<br />If a line consists of just "!", then the current filter
rules are cleared before adding any further rules.<br />If FILE is '-', the list will be read from standard input.</li>
<li><code>--cvs-exclude</code>, <code>-C</code><br />This is a useful shorthand for excluding a broad range of
files that you often don't want to transfer between
systems. It uses a similar algorithm to CVS to determine
if a file should be ignored.<br />The exclude list is initialized to exclude the following
items (these initial items are marked as perishable):
<pre><code>RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS
.make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old
*.bak *.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj
*.so *.exe *.Z *.elc *.ln core .svn/ .git/ .hg/ .bzr/</code></pre>
<p>then, files listed in a <code>$HOME/.cvsignore</code> are added to the
list and any files listed in the <code>CVSIGNORE</code> environment
variable (all cvsignore names are delimited by
whitespace).<br />Finally, any file is ignored if it is in the same
directory as a <code>.cvsignore</code> file and matches one of the
patterns listed therein. Unlike rsync's filter/exclude
files, these patterns are split on whitespace.<br />If you're combining <code>-C</code> with your own <code>--filter</code> rules, you
should note that these CVS excludes are appended at the
end of your own rules, regardless of where the <code>-C</code> was
placed on the command-line. This makes them a lower
priority than any rules you specified explicitly. If you
want to control where these CVS excludes get inserted into
your filter rules, you should omit the <code>-C</code> as a command-
line option and use a combination of <code>--filter=:C</code> and
<code>--filter=-C</code> (either on your command-line or by putting the
<code>":C"</code> and <code>"-C"</code> rules into a filter file with your other
rules). The first option turns on the per-directory
scanning for the <code>.cvsignore</code> file. The second option does
a one-time import of the CVS excludes mentioned above.</p></li>
</ul>
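As a sketch of the <code>--exclude-from</code> option described above, the following puts two patterns (plus a comment line, which is ignored) in a file and uses it for a local copy. The <code>/tmp</code> paths are illustrative only:

```shell
# Build a small exclude-patterns file; '#' comment lines are ignored.
cat > /tmp/xfdemo.excludes <<'EOF'
# build artifacts
*.o
*.tmp
EOF
mkdir -p /tmp/xfdemo/src
touch /tmp/xfdemo/src/main.c /tmp/xfdemo/src/main.o
# main.o matches "*.o" in the file and is left out of the transfer
rsync -a --exclude-from=/tmp/xfdemo.excludes /tmp/xfdemo/src/ /tmp/xfdemo/dst/
```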
<p>Rsync supports old-style include/exclude rules and new-style
filter rules. The older rules are specified using <code>--include</code> and
<code>--exclude</code> as well as the <code>--include-from</code> and <code>--exclude-from</code>. These
are limited in behavior but they don't require a <code>"-"</code> or <code>"+"</code>
prefix. An old-style exclude rule is turned into a <code>"- name"</code>
filter rule (with no modifiers) and an old-style include rule is
turned into a <code>"+ name"</code> filter rule (with no modifiers).</p>
<p>For more control on included/excluded files you should use
these options:</p>
<ul>
<li><code>--filter=RULE</code>, <code>-f</code> <br />This option allows you to add rules to selectively exclude
certain files from the list of files to be transferred.
This is most useful in combination with a recursive
transfer.<br />You may use as many <code>--filter</code> options on the command line
as you like to build up the list of files to exclude. If
the filter contains whitespace, be sure to quote it so
that the shell gives the rule to rsync as a single
argument. The text below also mentions that you can use
an underscore to replace the space that separates a rule
from its arg.</li>
<li><code>-F</code> <br />The <code>-F</code> option is a shorthand for adding two <code>--filter</code> rules
to your command. The first time it is used is a shorthand
for this rule:
<pre><code>--filter='dir-merge /.rsync-filter'</code></pre>
<p>This tells rsync to look for per-directory <code>.rsync-filter</code>
files that have been sprinkled through the hierarchy and
use their rules to filter the files in the transfer. If
<code>-F</code> is repeated, it is a shorthand for this rule:</p>
<pre><code>--filter='exclude .rsync-filter'</code></pre>
<p>This filters out the <code>.rsync-filter</code> files themselves from
the transfer.</p></li>
</ul>
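A minimal sketch of the <code>-F</code> behaviour just described, using a per-directory <code>.rsync-filter</code> file (the <code>/tmp</code> paths are illustrative):

```shell
# A per-directory filter file that excludes *.bak files.
mkdir -p /tmp/fdemo/src
printf -- '- *.bak\n' > /tmp/fdemo/src/.rsync-filter
touch /tmp/fdemo/src/notes.txt /tmp/fdemo/src/notes.bak
# -F is shorthand for --filter='dir-merge /.rsync-filter'
rsync -aF /tmp/fdemo/src/ /tmp/fdemo/dst/
```

With a single <code>-F</code> the <code>.rsync-filter</code> file itself is still transferred; repeat it (<code>-FF</code>) to exclude the filter files too.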
<h2 id="Rules" name="Rules">Rules</h2>
<p>The filter rules allow for custom control of several aspects of
how files are handled:</p>
<ul>
<li>Control which files the sending side puts into the file
list that describes the transfer hierarchy</li>
<li>Control which files the receiving side protects from
deletion when the file is not in the sender's file list</li>
<li>Control which extended attribute names are skipped when
copying xattrs</li>
</ul>
<p>The rules are either directly specified via option arguments or
they can be read in from one or more files. The filter-rule
files can even be a part of the hierarchy of files being copied,
affecting different parts of the tree in different ways.</p>
<h2 id="Simple+include%2Fexclude+rules" name="Simple+include%2Fexclude+rules">Simple include/exclude rules</h2>
<p>We will first cover the basics of how include & exclude rules
affect what files are transferred, ignoring any deletion side-
effects. Filter rules mainly affect the contents of directories
that rsync is "recursing" into, but they can also affect a top-level
item in the transfer that was specified as an argument.</p>
<p>The default for any unmatched file/dir is for it to be included
in the transfer, which puts the file/dir into the sender's file
list. The use of an exclude rule causes one or more matching
files/dirs to be left out of the sender's file list. An include
rule can be used to limit the effect of an exclude rule that is
matching too many files.</p>
<p>The order of the rules is important because the first rule that
matches is the one that takes effect. Thus, if an early rule
excludes a file, no include rule that comes after it can have any
effect. This means that you must place any include overrides
somewhere prior to the exclude that it is intended to limit.</p>
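The first-match-wins ordering can be demonstrated with a local copy (the <code>/tmp</code> paths are illustrative):

```shell
mkdir -p /tmp/ruledemo/src
touch /tmp/ruledemo/src/keep.log /tmp/ruledemo/src/junk.log
# The include for keep.log must come BEFORE the exclude of *.log;
# reversing the two -f options would exclude keep.log as well.
rsync -a -f'+ keep.log' -f'- *.log' /tmp/ruledemo/src/ /tmp/ruledemo/dst/
```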
<p>When a directory is excluded, all its contents and sub-contents
are also excluded. The sender doesn't scan through any of it at
all, which can save a lot of time when skipping large unneeded
sub-trees.</p>
<p>It is also important to understand that the include/exclude rules
are applied to every file and directory that the sender is
recursing into. Thus, if you want a particular deep file to be
included, you have to make sure that none of the directories that
must be traversed on the way down to that file are excluded or
else the file will never be discovered to be included. As an
example, if the directory "a/path" was given as a transfer
argument and you want to ensure that the file
"a/path/down/deep/wanted.txt" is a part of the transfer, then the
sender must not exclude the directories "a/path", "a/path/down",
or "a/path/down/deep" as it makes its way scanning through the
file tree.</p>
<p>When you are working on the rules, it can be helpful to ask rsync
to tell you what is being excluded/included and why. Specifying
<code>--debug=FILTER</code> or (when pulling files) <code>-M--debug=FILTER</code> turns on
level 1 of the <code>FILTER</code> debug information that will output a
message any time that a file or directory is included or excluded
and which rule it matched. Beginning in 3.2.4 it will also warn
if a filter rule has trailing whitespace, since an exclude of
"foo " (with a trailing space) will not exclude a file named
"foo".</p>
<p>Exclude and include rules can specify wildcard <strong>PATTERN MATCHING RULES</strong>
(similar to shell wildcards) that allow you to match things
like a file suffix or a portion of a filename.</p>
<p>A rule can be limited to only affecting a directory by putting a
trailing slash onto the filename.</p>
<h2 id="Simple+include%2Fexclude+example" name="Simple+include%2Fexclude+example">Simple include/exclude example</h2>
<p>With the following file tree created on the sending side:</p>
<pre><code>mkdir x/
touch x/file.txt
mkdir x/y/
touch x/y/file.txt
touch x/y/zzz.txt
mkdir x/z/
touch x/z/file.txt</code></pre>
<p>Then the following rsync command will transfer the file
<code>"x/y/file.txt"</code> and the directories needed to hold it, resulting
in the path <code>"/tmp/x/y/file.txt"</code> existing on the remote host:</p>
<pre><code>rsync -ai -f'+ x/' -f'+ x/y/' -f'+ x/y/file.txt' -f'- *' x host:/tmp/</code></pre>
<p>Aside: this copy could also have been accomplished using the <code>-R</code>
option (though the two commands behave differently if deletions are
enabled):</p>
<pre><code>rsync -aiR x/y/file.txt host:/tmp/</code></pre>
<p>The following command does not need an include of the <code>"x"</code>
directory because it is not a part of the transfer (note the
trailing slash). Running this command would copy just
<code>"/tmp/x/file.txt"</code> because the <code>"y"</code> and <code>"z"</code> dirs get excluded:</p>
<pre><code>rsync -ai -f'+ file.txt' -f'- *' x/ host:/tmp/x/</code></pre>
<p>This command would omit the <code>zzz.txt</code> file while copying <code>"x"</code> and
everything else it contains:</p>
<pre><code>rsync -ai -f'- zzz.txt' x host:/tmp/</code></pre>
<h2 id="Filter+rules+when+deleting" name="Filter+rules+when+deleting">Filter rules when deleting</h2>
<p>By default the include & exclude filter rules affect both the
sender (as it creates its file list) and the receiver (as it
creates its file lists for calculating deletions). If no delete
option is in effect, the receiver skips creating the delete-
related file lists. This two-sided default can be manually
overridden so that you are only specifying sender rules or
receiver rules, as described in the
<strong>FILTER RULES IN DEPTH</strong>
section.</p>
<p>When deleting, an exclude protects a file from being removed on
the receiving side while an include overrides that protection
(putting the file at risk of deletion). The default is for a file
to be at risk -- its safety depends on it matching a
corresponding file from the sender.</p>
<p>An example of the two-sided exclude effect can be illustrated by
the copying of a C development directory between two systems. When
doing a touch-up copy, you might want to skip copying the built
executable and the .o files (sender hide) so that the receiving
side can build their own and not lose any object files that are
already correct (receiver protect). For instance:</p>
<pre><code>rsync -ai --del -f'- *.o' -f'- cmd' src host:/dest/</code></pre>
<p>Note that using <code>-f'-p *.o'</code> is even better than <code>-f'- *.o'</code> if there
is a chance that the directory structure may have changed. The
<code>"p"</code> modifier is discussed in <strong>FILTER RULE MODIFIERS</strong>.</p>
<p>One final note, if your shell doesn't mind unexpanded wildcards,
you could simplify the typing of the filter options by using an
underscore in place of the space and leaving off the quotes. For
instance, <code>-f -_*.o -f -_cmd</code> (and similar) could be used instead
of the filter options above.</p>
<h2 id="Filter+rules+in+depth" name="Filter+rules+in+depth">Filter rules in depth</h2>
<p>Rsync builds an ordered list of filter rules as specified on the
command-line and/or read in from files. New-style filter rules
have the following syntax:</p>
<pre><code>RULE [PATTERN_OR_FILENAME]
RULE,MODIFIERS [PATTERN_OR_FILENAME]</code></pre>
<p>You have your choice of using either short or long <em>RULE</em> names, as
described below. If you use a short-named rule, the <code>','</code>
separating the <em>RULE</em> from the <em>MODIFIERS</em> is optional. The <em>PATTERN</em>
or <em>FILENAME</em> that follows (when present) must come after either a
single space or an underscore (_). Any additional spaces and/or
underscores are considered to be a part of the pattern name.
Here are the available rule prefixes:</p>
<ul>
<li><code>exclude, '-'</code> <br />specifies an exclude pattern that (by default) is both a
hide and a protect.</li>
<li><code>include, '+'</code> <br />specifies an include pattern that (by default) is both a
show and a risk.</li>
<li><code>merge, '.'</code> <br />specifies a merge-file on the client side to read for more
rules.</li>
<li><code>dir-merge, ':'</code> <br />specifies a per-directory merge-file. Using this kind of
filter rule requires that you trust the sending side's
filter checking, so it has the side-effect mentioned under
the <code>--trust-sender</code> option.</li>
<li><code>hide, 'H'</code> <br />specifies a pattern for hiding files from the transfer.
Equivalent to a sender-only exclude, so <code>-f'H foo'</code> could
also be specified as <code>-f'-s foo'</code>.</li>
<li><code>show, 'S'</code> <br />files that match the pattern are not hidden. Equivalent to
a sender-only include, so <code>-f'S foo'</code> could also be
specified as <code>-f'+s foo'</code>.</li>
<li><code>protect, 'P'</code> <br />specifies a pattern for protecting files from deletion.
Equivalent to a receiver-only exclude, so <code>-f'P foo'</code> could
also be specified as <code>-f'-r foo'</code>.</li>
<li><code>risk, 'R'</code> <br />files that match the pattern are not protected. Equivalent
to a receiver-only include, so <code>-f'R foo'</code> could also be
specified as <code>-f'+r foo'</code>.</li>
<li><code>clear, '!'</code> <br />clears the current include/exclude list (takes no arg).</li>
</ul>
<p>When rules are being read from a file (using <code>merge</code> or <code>dir-merge</code>),
empty lines are ignored, as are whole-line comments that start
with a <code>'#'</code> (filename rules that contain a hash character are
unaffected).</p>
<p>Note also that the <code>--filter</code>, <code>--include</code>, and <code>--exclude</code> options
take one rule/pattern each. To add multiple ones, you can repeat
the options on the command-line, use the merge-file syntax of the
<code>--filter</code> option, or the <code>--include-from</code> / <code>--exclude-from</code> options.</p>
<h2 id="Pattern+matching+rules" name="Pattern+matching+rules">Pattern matching rules</h2>
<p>Most of the rules mentioned above take an argument that specifies
what the rule should match. If rsync is recursing through a
directory hierarchy, keep in mind that each pattern is matched
against the name of every directory in the descent path as rsync
finds the filenames to send.</p>
<p>The matching rules for the pattern argument take several forms:</p>
<ul>
<li>If a pattern contains a <code>/</code> (not counting a trailing slash)
or a <code>"**"</code> (which can match a slash), then the pattern is
matched against the full pathname, including any leading
directories within the transfer. If the pattern doesn't
contain a (non-trailing) <code>/</code> or a <code>"**"</code>, then it is matched
only against the final component of the filename or
pathname. For example, <code>foo</code> means that the final path
component must be <code>"foo"</code> while <code>foo/bar</code> would match the last
2 elements of the path (as long as both elements are
within the transfer).</li>
<li>A pattern that ends with a <code>/</code> only matches a directory, not
a regular file, symlink, or device.</li>
<li>A pattern that starts with a <code>/</code> is anchored to the start of
the transfer path instead of the end. For example,
<code>/foo/**</code> or <code>/foo/bar/**</code> match only leading elements in the
path. If the rule is read from a per-directory filter
file, the transfer path being matched will begin at the
level of the filter file instead of the top of the
transfer. See the section on
<strong>ANCHORING INCLUDE/EXCLUDE PATTERNS</strong>
for a full discussion of how to specify a pattern
that matches at the root of the transfer.</li>
</ul>
<p>Rsync chooses between doing a simple string match and wildcard
matching by checking if the pattern contains one of these three
wildcard characters: '*', '?', and '[' :</p>
<ul>
<li>a <code>'?'</code> matches any single character except a slash (<code>/</code>).</li>
<li>a <code>'*'</code> matches zero or more non-slash characters.</li>
<li>a <code>'**'</code> matches zero or more characters, including slashes.</li>
<li>a <code>'['</code> introduces a character class, such as <code>[a-z]</code> or
<code>[[:alpha:]]</code>, that must match one character.</li>
<li>a trailing <code>***</code> in the pattern is a shorthand that allows
you to match a directory and all its contents using a
single rule. For example, specifying <code>"dir_name/***"</code> will
match both the <code>"dir_name"</code> directory (as if <code>"dir_name/"</code> had
been specified) and everything in the directory (as if
<code>"dir_name/**"</code> had been specified).</li>
<li>a backslash can be used to escape a wildcard character,
but it is only interpreted as an escape character if at
least one wildcard character is present in the match
pattern. For instance, the pattern <code>"foo\bar"</code> matches that
single backslash literally, while the pattern <code>"foo\bar*"</code>
would need to be changed to <code>"foo\\bar*"</code> to avoid the <code>"\b"</code>
becoming just <code>"b"</code>.</li>
</ul>
<p>Here are some examples of exclude/include matching:</p>
<ul>
<li>Option <code>-f'- *.o'</code> would exclude all filenames ending with
<code>.o</code></li>
<li>Option <code>-f'- /foo'</code> would exclude a file (or directory)
named <code>foo</code> in the transfer-root directory</li>
<li>Option <code>-f'- foo/'</code> would exclude any directory named <code>foo</code></li>
<li>Option <code>-f'- foo/*/bar'</code> would exclude any file/dir named
bar which is at two levels below a directory named <code>foo</code> (if
<code>foo</code> is in the transfer)</li>
<li>Option <code>-f'- /foo/**/bar'</code> would exclude any file/dir named
<code>bar</code> that was two or more levels below a top-level
directory named <code>foo</code> (note that <code>/foo/bar</code> is not excluded by
this)</li>
<li>Options <code>-f'+ */'</code> <code>-f'+ *.c'</code> <code>-f'- *'</code> would include all
directories and <code>.c</code> source files but nothing else</li>
<li>Options <code>-f'+ foo/'</code> <code>-f'+ foo/bar.c'</code> <code>-f'- *'</code> would include
only the <code>foo</code> directory and <code>foo/bar.c</code> (the <code>foo</code> directory
must be explicitly included or it would be excluded by the
<code>"- *"</code>)</li>
</ul>
<h2 id="Filter+rule+modifiers" name="Filter+rule+modifiers">Filter rule modifiers</h2>
<p>The following modifiers are accepted after an include <code>(+)</code> or
exclude <code>(-)</code> rule:</p>
<ul>
<li>A <code>/</code> specifies that the include/exclude rule should be
matched against the absolute pathname of the current item.
For example, <code>-f'-/ /etc/passwd'</code> would exclude the <code>passwd</code>
file any time the transfer was sending files from the
<code>"/etc"</code> directory, and <code>"-/ subdir/foo"</code> would always exclude
<code>"foo"</code> when it is in a dir named <code>"subdir"</code>, even if <code>"foo"</code> is
at the root of the current transfer.</li>
<li>A <code>!</code> specifies that the include/exclude should take effect
if the pattern fails to match. For instance, <code>-f'-! */'</code>
would exclude all non-directories.</li>
<li>A <code>C</code> is used to indicate that all the global CVS-exclude
rules should be inserted as excludes in place of the <code>"-C"</code>.
No arg should follow.</li>
<li>An <code>s</code> is used to indicate that the rule applies to the
sending side. When a rule affects the sending side, it
affects what files are put into the sender's file list.
The default is for a rule to affect both sides unless
<code>--delete-excluded</code> was specified, in which case default
rules become sender-side only. See also the hide <code>(H)</code> and
show <code>(S)</code> rules, which are an alternate way to specify
sending-side includes/excludes.</li>
<li>An <code>r</code> is used to indicate that the rule applies to the
receiving side. When a rule affects the receiving side,
it prevents files from being deleted. See the <code>s</code> modifier
for more info. See also the protect <code>(P)</code> and risk <code>(R)</code>
rules, which are an alternate way to specify receiver-side
includes/excludes.</li>
<li>A <code>p</code> indicates that a rule is perishable, meaning that it
is ignored in directories that are being deleted. For
instance, the <code>--cvs-exclude</code> <code>(-C)</code> option's default rules
that exclude things like <code>"CVS"</code> and <code>"*.o"</code> are marked as
perishable, and will not prevent a directory that was
removed on the source from being deleted on the
destination.</li>
<li>An <code>x</code> indicates that a rule affects <code>xattr</code> names in <code>xattr</code>
copy/delete operations (and is thus ignored when matching
file/dir names). If no xattr-matching rules are
specified, a default xattr filtering rule is used.</li>
</ul>
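A small sketch of receiver-side protection, equivalent to the <code>r</code> modifier discussed above, using the <code>P</code> rule prefix (the <code>/tmp</code> paths are illustrative):

```shell
mkdir -p /tmp/pdemo/src /tmp/pdemo/dst
touch /tmp/pdemo/src/a.txt
touch /tmp/pdemo/dst/local.cache    # exists only on the receiver
# 'P local.cache' (equivalently '-r local.cache') protects the
# receiver-only file from the --delete pass.
rsync -a --delete -f'P local.cache' /tmp/pdemo/src/ /tmp/pdemo/dst/
```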
<h2 id="Merge-file+filter+rules" name="Merge-file+filter+rules">Merge-file filter rules</h2>
<p>You can merge whole files into your filter rules by specifying
either a <code>merge</code> (.) or a <code>dir-merge</code> (:) filter rule (as introduced
in the <strong>FILTER RULES</strong> section above).</p>
<p>There are two kinds of merged files -- single-instance ('.') and
per-directory (':'). A single-instance merge file is read one
time, and its rules are incorporated into the filter list in the
place of the "." rule. For per-directory merge files, rsync will
scan every directory that it traverses for the named file,
merging its contents when the file exists into the current list
of inherited rules. These per-directory rule files must be
created on the sending side because it is the sending side that
is being scanned for the available files to transfer. These rule
files may also need to be transferred to the receiving side if
you want them to affect what files don't get deleted (see
<strong>PER-DIRECTORY RULES AND DELETE</strong>
below).</p>
<p>Some examples:</p>
<pre><code>merge /etc/rsync/default.rules
. /etc/rsync/default.rules
dir-merge .per-dir-filter
dir-merge,n- .non-inherited-per-dir-excludes
:n- .non-inherited-per-dir-excludes</code></pre>
<p>The following modifiers are accepted after a merge or dir-merge
rule:</p>
<ul>
<li>A <code>-</code> specifies that the file should consist of only exclude
patterns, with no other rule-parsing except for in-file
comments.</li>
<li>A <code>+</code> specifies that the file should consist of only include
patterns, with no other rule-parsing except for in-file
comments.</li>
<li>A <code>C</code> is a way to specify that the file should be read in a
CVS-compatible manner. This turns on <code>'n'</code>, <code>'w'</code>, and <code>'-'</code>,
but also allows the list-clearing token <code>(!)</code> to be
specified. If no filename is provided, <code>".cvsignore"</code> is
assumed.</li>
<li>An <code>e</code> will exclude the merge-file name from the transfer;
e.g. <code>"dir-merge,e .rules"</code> is like <code>"dir-merge .rules"</code> and
<code>"- .rules"</code>.</li>
<li>An <code>n</code> specifies that the rules are not inherited by
subdirectories.</li>
<li>A <code>w</code> specifies that the rules are word-split on whitespace
instead of the normal line-splitting. This also turns off
comments. Note: the space that separates the prefix from
the rule is treated specially, so <code>"- foo + bar"</code> is parsed
as two rules (assuming that prefix-parsing wasn't also
disabled).</li>
<li>You may also specify any of the modifiers for the <code>"+"</code> or
<code>"-"</code> rules (above) in order to have the rules that are read
in from the file default to having that modifier set
(except for the <code>!</code> modifier, which would not be useful).
For instance, <code>"merge,-/ .excl"</code> would treat the contents of
<code>.excl</code> as absolute-path excludes, while <code>"dir-merge,s .filt"</code>
and <code>":sC"</code> would each make all their per-directory rules
apply only on the sending side. If the merge rule
specifies sides to affect (via the <code>s</code> or <code>r</code> modifier or
both), then the rules in the file must not specify sides
(via a modifier or a rule prefix such as hide).</li>
</ul>
<p>Per-directory rules are inherited in all subdirectories of the
directory where the merge-file was found unless the <code>'n'</code> modifier
was used. Each subdirectory's rules are prefixed to the
inherited per-directory rules from its parents, which gives the
newest rules a higher priority than the inherited rules. The
entire set of dir-merge rules are grouped together in the spot
where the merge-file was specified, so it is possible to override
dir-merge rules via a rule that got specified earlier in the list
of global rules. When the list-clearing rule <code>("!")</code> is read from
a per-directory file, it only clears the inherited rules for the
current merge file.</p>
<p>Another way to prevent a single rule from a dir-merge file from
being inherited is to anchor it with a leading slash. Anchored
rules in a per-directory merge-file are relative to the merge-
file's directory, so a pattern <code>"/foo"</code> would only match the file
<code>"foo"</code> in the directory where the dir-merge filter file was found.</p>
<p>Here's an example filter file which you'd specify via
<code>--filter=". file"</code>:</p>
<pre><code>merge /home/user/.global-filter
- *.gz
dir-merge .rules
+ *.[ch]
- *.o
- foo*</code></pre>
<p>This will merge the contents of the <code>/home/user/.global-filter</code>
file at the start of the list and also turns the <code>".rules"</code>
filename into a per-directory filter file. All rules read in
prior to the start of the directory scan follow the global
anchoring rules (i.e. a leading slash matches at the root of the
transfer).</p>
<p>If a per-directory merge-file is specified with a path that is a
parent directory of the first transfer directory, rsync will scan
all the parent dirs from that starting point to the transfer
directory for the indicated per-directory file. For instance,
here is a common filter (see -F):</p>
<pre><code>--filter=': /.rsync-filter'</code></pre>
<p>That rule tells rsync to scan for the file .rsync-filter in all
directories from the root down through the parent directory of
the transfer prior to the start of the normal directory scan of
the file in the directories that are sent as a part of the
transfer. (Note: for an rsync daemon, the root is always the same
as the module's "path".)</p>
<p>Some examples of this pre-scanning for per-directory files:</p>
<pre><code>rsync -avF /src/path/ /dest/dir
rsync -av --filter=': ../../.rsync-filter' /src/path/ /dest/dir
rsync -av --filter=': .rsync-filter' /src/path/ /dest/dir</code></pre>
<p>The first two commands above will look for <code>".rsync-filter"</code> in <code>"/"</code>
and <code>"/src"</code> before the normal scan begins looking for the file in
<code>"/src/path"</code> and its subdirectories. The last command avoids the
parent-dir scan and only looks for the <code>".rsync-filter"</code> files in
each directory that is a part of the transfer.</p>
<p>If you want to include the contents of a <code>".cvsignore"</code> in your
patterns, you should use the rule <code>":C"</code>, which creates a dir-merge
of the <code>.cvsignore</code> file, but parsed in a CVS-compatible manner.
You can use this to affect where the <code>--cvs-exclude</code> <code>(-C)</code> option's
inclusion of the per-directory .cvsignore file gets placed into
your rules by putting the <code>":C"</code> wherever you like in your filter
rules. Without this, rsync would add the dir-merge rule for the
<code>.cvsignore</code> file at the end of all your other rules (giving it a
lower priority than your command-line rules). For example:</p>
<pre><code>cat <<EOT | rsync -avC --filter='. -' a/ b
+ foo.o
:C
- *.old
EOT
rsync -avC --include=foo.o -f :C --exclude='*.old' a/ b</code></pre>
<p>Both of the above rsync commands are identical. Each one will
merge all the per-directory <code>.cvsignore</code> rules in the middle of the
list rather than at the end. This allows their dir-specific
rules to supersede the rules that follow the <code>:C</code> instead of being
subservient to all your rules. To affect the other CVS exclude
rules (i.e. the default list of exclusions, the contents of
<code>$HOME/.cvsignore</code>, and the value of <code>$CVSIGNORE</code>) you should omit
the <code>-C</code> command-line option and instead insert a <code>"-C"</code> rule into
your filter rules; e.g. <code>"--filter=-C"</code>.</p>
<h2 id="List-clearing+filter+rule" name="List-clearing+filter+rule">List-clearing filter rule</h2>
<p>You can clear the current include/exclude list by using the <code>"!"</code>
filter rule (as introduced in the
<strong>FILTER RULES</strong>
section above).
The <em>"current"</em> list is either the global list of rules (if the
rule is encountered while parsing the filter options) or a set of
per-directory rules (which are inherited in their own sub-list,
so a subdirectory can use this to clear out the parent's rules).</p>
<h2 id="Anchoring+include%2Fexclude+patterns" name="Anchoring+include%2Fexclude+patterns">Anchoring include/exclude patterns</h2>
<p>As mentioned earlier, global include/exclude patterns are
anchored at the "root of the transfer" (as opposed to per-
directory patterns, which are anchored at the merge-file's
directory). If you think of the transfer as a subtree of names
that are being sent from sender to receiver, the transfer-root is
where the tree starts to be duplicated in the destination
directory. This root governs where patterns that start with a /
match.</p>
<p>Because the matching is relative to the transfer-root, changing
the trailing slash on a source path or changing your use of the
<code>--relative</code> option affects the path you need to use in your
matching (in addition to changing how much of the file tree is
duplicated on the destination host). The following examples
demonstrate this.</p>
<p>Let's say that we want to match two source files, one with an
absolute path of <code>"/home/me/foo/bar"</code>, and one with a path of
<code>"/home/you/bar/baz"</code>. Here is how the various command choices
differ for a 2-source transfer:</p>
<pre><code>Example cmd: rsync -a /home/me /home/you /dest
+/- pattern: /me/foo/bar
+/- pattern: /you/bar/baz
Target file: /dest/me/foo/bar
Target file: /dest/you/bar/baz

Example cmd: rsync -a /home/me/ /home/you/ /dest
+/- pattern: /foo/bar (note missing "me")
+/- pattern: /bar/baz (note missing "you")
Target file: /dest/foo/bar
Target file: /dest/bar/baz

Example cmd: rsync -a --relative /home/me/ /home/you /dest
+/- pattern: /home/me/foo/bar (note full path)
+/- pattern: /home/you/bar/baz (ditto)
Target file: /dest/home/me/foo/bar
Target file: /dest/home/you/bar/baz

Example cmd: cd /home; rsync -a --relative me/foo you/ /dest
+/- pattern: /me/foo/bar (starts at specified path)
+/- pattern: /you/bar/baz (ditto)
Target file: /dest/me/foo/bar
Target file: /dest/you/bar/baz</code></pre>
<p>The easiest way to see what name you should filter is to just
look at the output when using <code>--verbose</code> and put a <code>/</code> in front of
the name (use the <code>--dry-run</code> option if you're not yet ready to
copy any files).</p>
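<p>The trailing-slash rule above can be sketched as a small helper (this is an illustration, not part of rsync): given a source argument and a file's absolute path, it computes the /-anchored pattern you would need for the no-<code>--relative</code> case.</p>

```python
import os

def anchored_pattern(src, abs_path):
    """Return the /-anchored filter pattern for abs_path when
    transferring src without --relative (illustrative sketch)."""
    if src.endswith("/"):
        # trailing slash: only the *contents* of src are transferred
        base = src.rstrip("/")
    else:
        # no trailing slash: the last component of src is transferred too
        base = os.path.dirname(src)
    return "/" + os.path.relpath(abs_path, base)

print(anchored_pattern("/home/me", "/home/me/foo/bar"))   # /me/foo/bar
print(anchored_pattern("/home/me/", "/home/me/foo/bar"))  # /foo/bar
```

<p>This reproduces the first two example groups above; with <code>--relative</code> the pattern instead follows the path exactly as specified on the command line.</p>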
<h2 id="Per-directory+rules+and+delete" name="Per-directory+rules+and+delete">Per-directory rules and delete</h2>
<p>Without a delete option, per-directory rules are only relevant on
the sending side, so you can feel free to exclude the merge files
themselves without affecting the transfer. To make this easy,
the <code>'e'</code> modifier adds this exclude for you, as seen in these two
equivalent commands:</p>
<pre><code>rsync -av --filter=': .excl' --exclude=.excl host:src/dir /dest
rsync -av --filter=':e .excl' host:src/dir /dest</code></pre>
<p>However, if you want to do a delete on the receiving side AND you
want some files to be excluded from being deleted, you'll need to
be sure that the receiving side knows what files to exclude. The
easiest way is to include the per-directory merge files in the
transfer and use <code>--delete-after</code>, because this ensures that the
receiving side gets all the same exclude rules as the sending
side before it tries to delete anything:</p>
<pre><code>rsync -avF --delete-after host:src/dir /dest</code></pre>
<p>However, if the merge files are not a part of the transfer,
you'll need to either specify some global exclude rules (i.e.
specified on the command line), or you'll need to maintain your
own per-directory merge files on the receiving side. An example
of the first is this (assume that the remote .rules files exclude
themselves):</p>
<pre><code>rsync -av --filter=': .rules' --filter='. /my/extra.rules' --delete host:src/dir /dest</code></pre>
<p>In the above example the extra.rules file can affect both sides
of the transfer, but (on the sending side) the rules are
subservient to the rules merged from the <code>.rules</code> files because
they were specified after the per-directory merge rule.</p>
<p>In one final example, the remote side is excluding the
<code>.rsync-filter</code> files from the transfer, but we want to use our own
<code>.rsync-filter</code> files to control what gets deleted on the receiving
side. To do this we must specifically exclude the per-directory
merge files (so that they don't get deleted) and then put rules
into the local files to control what else should not get deleted.
Like one of these commands:</p>
<pre><code>rsync -av --filter=':e /.rsync-filter' --delete host:src/dir /dest
rsync -avFF --delete host:src/dir /dest</code></pre>
Munin-tweaks
urn:uuid:61ff4d78-2a4b-baaa-d354-8e1063795eb1
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Small recipes to tweak munin configurations.</p>
<h2 id="Overriding+critical+and+warning+levels" name="Overriding+critical+and+warning+levels">Overriding critical and warning levels</h2>
<p>In the node configuration enter:</p>
<pre><code>plugin.field_name.critical value
plugin.field_name.warning value</code></pre>
<p>The plugin name can be found by clicking on the graph with the value
you want to override. The last component of the URL (without the <code>.html</code>)
is the plugin name.</p>
<p>The field name is in this view under the column <code>Internal name</code>.</p>
<p>Example:</p>
<pre><code>[xenhosts;cn4.localnet]
address cn4.localnet
use_node_name no
vgs.pool0_.warning 99.00
vgs.pool0_.critical 99.50</code></pre>
<h2 id="Reference+documentation" name="Reference+documentation">Reference documentation</h2>
<ul>
<li><a href="http://guide.munin-monitoring.org/en/latest/reference/munin.conf.html#field-directives">config field directives</a></li>
</ul>
Desktop Environments 2023
urn:uuid:539df26d-1f8a-1661-f944-4ebe54aabe22
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Around April 2023, I decided to look for a new <a href="https://en.wikipedia.org/wiki/Desktop_environment">Linux Desktop Environment</a> for my
personal <a href="https://voidlinux.org/">void linux</a> system. So I tried these desktop environments:</p>
<ul>
<li><a href="https://lxqt-project.org/about/">lxqt</a>: This is the one I eventually chose to switch to. I liked that it
was very small and light, and very modular, almost like a kit that you assemble
yourself. Because I have been thinking for some time that I would like to make my
own <a href="https://en.wikipedia.org/wiki/Desktop_environment">Desktop environment</a> from a <a href="https://en.wikipedia.org/wiki/Window_manager">Window Manager</a>, <a href="https://lxqt-project.org/about/">lxqt</a> is very close
to what I wanted to do with that idea.
<ul>
<li><strong>PROS</strong>:
<ul>
<li>Light and modular</li>
<li>Classic User Experience (great since I am a very old computer user).</li>
<li>Uses <a href="https://www.jwz.org/xscreensaver/">XScreenSaver</a></li>
<li>Uses QT widget set.</li>
</ul></li>
<li><strong>CONS</strong>
<ul>
<li>not as visually appealing as the other desktops here.</li>
</ul></li>
</ul></li>
<li><a href="https://mate-desktop.org/">MATE Desktop</a>: This is the desktop that I was using before. It is light on
resources and has all the features you would expect. Nothing against it, but I thought
it was time to move on. Also my <a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2020/mate-screensaver-hacks">XScreenSaver</a> stopped working in the latest
<a href="https://voidlinux.org/">void</a> update.
<ul>
<li><strong>PROS</strong>
<ul>
<li>Full-featured yet light on resources</li>
<li>Classic User Experience (great since I am a very old computer user).</li>
<li>Was possible to use <a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2020/mate-screensaver-hacks">XScreenSaver</a>.</li>
</ul></li>
</ul></li>
<li><a href="https://www.gnome.org/">standard gnome</a>: This is supposed to be the <strong>Premier</strong> Linux Desktop Environment.
Personally, I find it hard to use, as it completely deviates from the user experience of
<a href="https://microsoft.fandom.com/wiki/Windows_95">Microsoft Windows 95</a>. While it may be innovative, for me, an old computer user,
it just gets in the way of getting things done. So for me it is a total turn-off.
Also, under <a href="https://voidlinux.org/">void</a> I was only able to use the <a href="https://en.wikipedia.org/wiki/Wayland_(protocol)">Wayland</a> session.
<ul>
<li><strong>PROS</strong>
<ul>
<li>Very modern and popular desktop</li>
</ul></li>
<li><strong>CONS</strong>
<ul>
<li>Too modern for my taste</li>
<li>Could only run on <a href="https://en.wikipedia.org/wiki/Wayland_(protocol)">wayland</a>.</li>
</ul></li>
</ul></li>
<li><a href="https://wiki.gnome.org/Projects/GnomeFlashback">gnome flashback</a> : This brings the <a href="https://www.gnome.org/">gnome</a> desktop back to a <em>classic</em>
user experience. I personally find this UX much better than the <a href="https://www.gnome.org/">standard gnome</a> user
experience. Visually, it was very smooth and appealing. I found it quite pleasant. I was also
able to run it as part of the X session.
<ul>
<li><strong>PROS</strong>
<ul>
<li>Buttery smooth look and feel</li>
</ul></li>
</ul></li>
<li><a href="https://kde.org/plasma-desktop/">KDE plasma</a> : This is the <a href="https://kde.org/plasma-desktop/">KDE project</a>'s <a href="https://en.wikipedia.org/wiki/Desktop_environment">Desktop Environment</a>. I found
it OK, but not particularly special. I tried it on <a href="https://voidlinux.org/">void</a> and managed to get it
to work and do stuff, but it didn't feel worth sticking with for long.</li>
<li><a href="https://www.xfce.org/">XFCE4</a>: This is another classic UX <a href="https://en.wikipedia.org/wiki/Desktop_environment">Desktop Environment</a>. Just like <a href="https://mate-desktop.org/">MATE</a>,
it is a full-featured desktop that tends to be light on resources. I find it stable and a
good performer. The main reason why I did not opt for <a href="https://www.xfce.org/">xfce</a> is that this is the desktop
I usually use for Virtual Machines that I spin up on the cloud. So it is useful to have a
visually different UX so as not to get confused between working locally and working on a cloud
system.
<ul>
<li><strong>PROS</strong>:
<ul>
<li>Classic User Experience (great since I am a very old computer user).</li>
</ul></li>
</ul></li>
<li><a href="https://en.wikipedia.org/wiki/Budgie_(desktop_environment)">budgie</a>: Did not work for me on <a href="https://voidlinux.org/">void linux</a>. It would start, but the mouse
and keyboard would be unresponsive.</li>
<li><a href="https://en.wikipedia.org/wiki/Cinnamon_(desktop_environment)">cinnamon</a>: Did not work for me on <a href="https://voidlinux.org/">void linux</a>. It would start, but the mouse
and keyboard would be unresponsive.</li>
<li><a href="https://www.fosslinux.com/4652/pantheon-everything-you-need-to-know-about-the-elementary-os-desktop.htm">pantheon</a>: This is a desktop that I would like to try, but it is
not available on <a href="https://voidlinux.org/">Void Linux</a>.</li>
</ul>
<h2 id="Pet+Peeves" name="Pet+Peeves">Pet Peeves</h2>
<h3 id="Wayland" name="Wayland">Wayland</h3>
<p>The <a href="https://en.wikipedia.org/wiki/Wayland_(protocol)">Wayland</a> project started back in 2008. It started as a modern window
system implementation, slated to replace <a href="https://en.wikipedia.org/wiki/X.Org_Server">Xorg</a>. I am writing this in 2023,
and after 15 years, it is only partially available. As far as I can tell, the only
desktop environments that support <a href="https://en.wikipedia.org/wiki/Wayland_(protocol)">wayland</a> are <a href="https://www.gnome.org/">gnome</a> and
<a href="https://kde.org/plasma-desktop/">plasma</a>.</p>
<p>I think this is a pity, but that is how things are in Open Source without a real
corporate sponsor.</p>
<h3 id="GTK%2B+libraries" name="GTK%2B+libraries">GTK+ libraries</h3>
<p>I find this is a bit of a mess. I think it has to do with how the <a href="https://www.gnome.org/">gnome</a> project
is evolving its software components. I use <a href="https://voidlinux.org/">void linux</a> and at the time of this
writing I can install from the <a href="https://voidlinux.org/">void</a> repositories three different versions of
<a href="https://en.wikipedia.org/wiki/GTK">GTK+</a>, v2, v3 and v4. Also, I tried compiling a package called <a href="https://xnee.wordpress.com/">Xnee</a> and
that actually seemed to require v1.</p>
<p>Anyways, because different programs require different <a href="https://en.wikipedia.org/wiki/GTK">GTK+</a> release levels, this
means that my working <a href="https://voidlinux.org/">void</a> desktop has all three versions installed.</p>
<p>I personally find this quite messy. Again, this is a fact of life with fully Open Source
Software.</p>
Personal thoughts on GUI programming
urn:uuid:85d14b91-106c-a764-5f5a-1d41bfcd946e
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I have been programming for about 40 years, going through a lot of different languages
and programming paradigms.</p>
<p>Lately most of my programming is done in:</p>
<ul>
<li><a href="https://www.gnu.org/software/bash/">bash/shell</a> script</li>
<li><a href="https://www.python.org/">Python</a></li>
<li><a href="https://www.php.net/manual/en/intro-whatis.php">PHP</a>, with some bits in <a href="https://en.wikipedia.org/wiki/JavaScript">JavaScript</a></li>
</ul>
<p>So lately, after hitting some bugs in a recent update of the <a href="https://mate-desktop.org/">MATE</a> desktop environment,
I decided to re-do my desktop set-up, switching to a new desktop environment. After testing
several desktop environments I decided on <a href="https://lxqt-project.org/about/">LxQt</a> because of its minimalistic feel.
While doing this, I figured I needed some GUI scripts to spice things up.</p>
<p>In the past, I have done most of these simple GUI scripts in <a href="https://www.tcl.tk/">TCL/TK</a>, so I am very
familiar with writing GUI applications using <a href="https://www.tcl.tk/">TCL</a>.</p>
<p>However, <a href="https://www.tcl.tk/">TCL</a> is not a popular language; in a <a href="https://distantjob.com/blog/programming-languages-rank/">survey at the beginning of 2023</a> it
wasn't even mentioned. On the other hand, <a href="https://www.python.org/">python</a> seemed quite popular.</p>
<p>I decided that it would probably be a good idea to write these GUI scripts in <a href="https://www.python.org/">python</a>,
since <a href="https://www.python.org/">python</a> also provides a <a href="https://en.wikipedia.org/wiki/Tkinter">tkinter</a> module that seems to be
quite ubiquitous: <em>"batteries included"</em> distributions have it, and most Linux distributions
package it. Also, since it is supposed to be a straight port from <a href="https://www.tcl.tk/">TCL/TK</a>, I thought
the learning curve would be pretty smooth.</p>
<p>So I tried it for a couple of scripts: a <a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2020/pa-hints">volume control</a> and a macro recording
utility.</p>
<p>I have to admit that I find writing <a href="https://en.wikipedia.org/wiki/Tkinter">python3 tkinter</a> GUIs very awkward. Because
it is a direct translation of <a href="https://www.tcl.tk/">TCL/TK</a>, I keep thinking in <a href="https://www.tcl.tk/">TCL</a> terms, but things do
not work the same in <a href="https://www.python.org/">python</a>.</p>
<p>Because it was easier for me, the other GUI utilities, <a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2023/local-startup">local-startup</a> and
<a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2023/global-hotkeys">hk_helper</a>, were written in <a href="https://www.tcl.tk/">TCL</a>.</p>
<p>So, my conclusion is that while I can write GUI scripts in <a href="https://www.python.org/">python</a>, the learning
curve is still steep, and I need to get a lot more experience before I can consider myself
familiar with it.</p>
<p>Also, the decision to use <a href="https://www.tcl.tk/">tcl</a> for the scripts that were eventually written
in <a href="https://www.tcl.tk/">tcl</a> was driven not just by familiarity, but because the use case required
spawning new processes, which in <a href="https://www.tcl.tk/">tcl</a> is far easier than in <a href="https://www.python.org/">python</a>.</p>
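<p>To illustrate the difference: Tcl captures a command's output with a single <code>exec</code> builtin, while Python needs the <code>subprocess</code> module. A minimal sketch (standard library only):</p>

```python
import subprocess

# Tcl:  set out [exec echo hello]
# The Python equivalent needs a bit more ceremony:
result = subprocess.run(["echo", "hello"],
                        capture_output=True, text=True, check=True)
out = result.stdout.strip()
print(out)

# Fire-and-forget, roughly Tcl's "exec some-tool &":
proc = subprocess.Popen(["true"])  # returns immediately, child runs on its own
proc.wait()
```

<p>It is not a lot of code, but compared to Tcl's one-liner it is easy to see why spawning-heavy scripts stayed in Tcl.</p>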
<p>Sometimes I think using a more <em>pythonic</em> GUI library instead of <a href="https://en.wikipedia.org/wiki/Tkinter">tkinter</a> would
be better. But, like I mentioned earlier, <a href="https://en.wikipedia.org/wiki/Tkinter">tkinter</a> gets the job done, and can
be found with <a href="https://www.python.org/">python</a> very often.</p>
NacoWiki
urn:uuid:212c8a88-04c6-4bfb-3d86-29f863286bf6
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>A few months ago I extensively modified <a href="https://github.com/luckyshot/picowiki">picowiki</a> and created <a href="https://github.com/iliu-net/nanowiki">NanoWiki</a>.</p>
<p>After using <a href="https://github.com/iliu-net/nanowiki">NanoWiki</a> for a few months, the code became somewhat of a
spaghetti mess. This has to do with the fact that <a href="https://github.com/luckyshot/picowiki">picowiki</a> was designed as a single
file, single class application with plugin extensions. Since every change went
into a single class, this quickly became difficult to manage.</p>
<p>Additionally, I realized that "NanoWiki" was not a very good name as this
name was used by several projects and organizations.</p>
<p>So, I went ahead and re-wrote the whole thing into <a href="https://github.com/iliu-net/NacoWiki/">NacoWiki</a>. These are
the changes:</p>
<ul>
<li>Cleaner and more functional UI.</li>
<li>Initial REST-API support.</li>
<li>Added a CLI interface.</li>
<li>Off-tree installation, with the option of co-existing multiple instances.</li>
<li>Code modularization,
<ul>
<li><code>nacowiki</code> main class that integrates everything together.</li>
<li><code>Core</code>: main WIKI functionality</li>
<li><code>Cli</code>: CLI interface</li>
<li><code>PluginCollection</code>: plugin support</li>
</ul></li>
<li><code>CodeMirror</code> support now in the <code>Core</code> (instead of depending on plugin implementations).</li>
<li>Re-organized CSS files</li>
<li>Raw/source code display.</li>
</ul>
<p>As plugins:</p>
<ul>
<li>Page handlers:
<ul>
<li>HTML</li>
<li>Markdown</li>
<li><em>NEW</em> source code</li>
</ul></li>
<li><em>NEW</em> YouTube Links</li>
<li><em>NEW</em> Static site generator</li>
<li>Emojis</li>
<li>File includes</li>
<li>Var snippets</li>
<li>WikiLinks (<em>NEW</em>: search article names)</li>
</ul>
<p>Features/improvements over <a href="https://github.com/luckyshot/picowiki">picowiki</a>:</p>
<ul>
<li>file management: create, delete, rename, modify, attach, etc.</li>
<li>hooks for access control</li>
<li>meta data support</li>
<li>Disabled code execution. This can be considered a <em>"security"</em> feature.</li>
<li>Support for byte ranges. This lets you stream video files directly
from the wiki.</li>
<li>togglable folder or document views.</li>
<li>theme support</li>
<li>Multiple file type handling</li>
</ul>
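<p>The byte-range feature above boils down to honoring the HTTP <code>Range</code> request header. NacoWiki itself is PHP; this is a language-neutral sketch in Python of parsing a simple single-range <code>bytes=start-end</code> header (multi-range requests are ignored for brevity):</p>

```python
def parse_range(header, size):
    """Parse a single-range 'bytes=start-end' header.

    Returns an inclusive (start, end) byte tuple, or None when the
    header is absent, not a bytes range, or unsatisfiable for a
    resource of `size` bytes.
    """
    if not header or not header.startswith("bytes="):
        return None
    # only honor the first range of a possibly comma-separated list
    spec = header[len("bytes="):].split(",")[0].strip()
    start_s, sep, end_s = spec.partition("-")
    if not sep:
        return None
    if start_s == "":                 # suffix form: last N bytes
        n = int(end_s)
        return (max(size - n, 0), size - 1) if n > 0 else None
    start = int(start_s)
    if start >= size:
        return None                   # would be a 416 response
    end = int(end_s) if end_s else size - 1
    return (start, min(end, size - 1))

print(parse_range("bytes=0-499", 1000))   # (0, 499)
print(parse_range("bytes=-500", 1000))    # (500, 999)
```

<p>A server would then reply <code>206 Partial Content</code> with a matching <code>Content-Range</code> header and stream just that slice of the file, which is what makes seeking in video work.</p>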
<h2 id="Documentation" name="Documentation">Documentation</h2>
<p>Documentation is now handled by <a href="https://www.phpdoc.org/">phpDocumentor</a> plus HTML generated using
<a href="https://github.com/iliu-net/NacoWiki/">NacoWiki</a>'s own <code>SiteGen</code> plugin. You can find that here:</p>
<ul>
<li><a href="https://iliu-net.github.io/NacoWiki/php-api/">phpdoc generated documentation</a></li>
<li><a href="https://iliu-net.github.io/NacoWiki/">NacoWiki SiteGen generated docs</a></li>
</ul>
Python Development 2023
urn:uuid:55a004eb-f748-7b55-bf7a-983918e0195f
2024-03-05T00:00:00+01:00
Alejandro Liu
<div id="toc"><ul>
<li><a href="#Python+development+on+Windows">Python development on Windows</a></li>
<li><a href="#Distributing+Python+scripts+as+single+EXE+or+Directory">Distributing Python scripts as single EXE or Directory</a></li>
<li><a href="#Installing+netifaces+on+Windows">Installing netifaces on Windows</a></li>
<li><a href="#Documentation+generation">Documentation generation</a>
<ul>
<li><a href="#Using+%5Bsphinx%5D%5Bsphinx%5D+for+documentation">Using sphinx for documentation</a>
<ul>
<li><a href="#Example+docstring">Example docstring</a></li>
</ul></li>
</ul></li>
<li><a href="#Passing+Reserved+Keywords+as+Keyword+arguments">Passing Reserved Keywords as Keyword arguments</a></li>
</ul></div>
<hr />
<h2 id="Python+development+on+Windows" name="Python+development+on+Windows">Python development on Windows</h2>
<p>For windows, you can use <a href="https://winpython.github.io/">WinPython</a>.
I prefer to use this instead of the official distribution because:</p>
<ol>
<li>It is a portable distro.</li>
<li>You can choose a batteries-included release, or just the <code>dot</code> release,
which only contains Python and Pip.</li>
</ol>
<p>This way I can have multiple versions available. This is particularly
useful for me because of my <a href="https://open-telekom-cloud.com" title="Open Telekom Cloud">OTC</a> development which uses
<a href="https://wiki.openstack.org/wiki/SDKs">OpenStack SDK</a>. On my Windows system, I was only able
to make it work with Python v3.8.10 (due to some ancient dependencies).
Because I usually don't have a compiler, I can only use binary distros,
and I was unable to find them for a newer Python version.</p>
<h2 id="Distributing+Python+scripts+as+single+EXE+or+Directory" name="Distributing+Python+scripts+as+single+EXE+or+Directory">Distributing Python scripts as single EXE or Directory</h2>
<p>When distributing, I had good results with <a href="https://pyinstaller.org/en/stable/">pyinstaller</a>.
You can create a single-folder or a single-exe distribution. The
results work surprisingly well (when they work). Some hints when using
<a href="https://pyinstaller.org/en/stable/">pyinstaller</a>:</p>
<ul>
<li>Cross packaging is not possible. If you need a Windows package
you need to run it on Windows.</li>
<li>Dependencies do not always work correctly. You may need to use
these options:
<ul>
<li><code>--hidden-import module</code></li>
<li><code>--collect-data module</code></li>
<li><code>--copy-metadata module</code></li>
<li><code>--collect-all package</code></li>
</ul></li>
<li>In some cases, you may need to force the inclusion of non-python
files. Use:
<ul>
<li><code>set sitedir=%WINPYDIR%\Lib\site-packages</code></li>
<li>And in the command line:</li>
<li><code>--add-data %sitedir%\path\to\data\file;path\to\data</code></li>
<li>I use the <code>%sitedir%</code> variable to find things in the Python packages
directory.</li>
</ul></li>
<li>It is best to create a batch file to issue the <code>pyinstaller</code> command.</li>
<li>Because the command-line could become quite long, you can use the <code>^</code>
escape. Example:
<pre><code>pyinstaller %buildtype% ^
--hidden-import keystoneauth1 ^
--collect-data keystoneauth1 ^
--copy-metadata keystoneauth1 ^
--hidden-import os_service_types ^
--collect-data os_service_types ^
--copy-metadata os_service_types ^
--collect-all openstacksdk ^
--copy-metadata openstacksdk ^
--add-data %sitedir%\openstack\config\defaults.json;openstack\config ^
--hidden-import keystoneauth1.loading._plugins ^
--hidden-import keystoneauth1.loading._plugins.identity ^
--hidden-import keystoneauth1.loading._plugins.identity.generic ^
urotc.py</code></pre></li>
</ul>
<h2 id="Installing+netifaces+on+Windows" name="Installing+netifaces+on+Windows">Installing <code>netifaces</code> on Windows</h2>
<p>NOTE: I tested this on Feb 2023. See
<a href="https://allones.de/2018/11/05/python-netifaces-installation-microsoft-visual-c-14-0-is-required/">Original article here</a></p>
<p><code>netifaces</code> is an <a href="https://wiki.openstack.org/wiki/SDKs">OpenStack SDK</a> dependency. Under
Python 3.8.10 I am able to install it using the <code>--only-binary=netifaces</code> option.</p>
<p>For newer versions it will fail with a <code>Microsoft Visual C++ 14.0 is required</code>
error message:</p>
<pre><code>C:\RH>pip install netifaces
Collecting netifaces
Downloading https://files.pythonhosted.org/packages/81/39/4e9a026265ba944ddf1fea176dbb29e0fe50c43717ba4fcf3646d099fe38/netifaces-0.10.7.tar.gz
Installing collected packages: netifaces
Running setup.py install for netifaces ... error
Complete output from command c:\users\rh\appdata\local\programs\python\python37\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\RH\\AppData\\Local\\Temp\\pip-install-wbfanly3\\netifaces\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\RONALD~1.HEI\AppData\Local\Temp\pip-record-m26yfbyt\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_ext
building 'netifaces' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools</code></pre>
<p>Since the suggested URL doesn't work, you need to do the following:</p>
<ol>
<li>Go to the Microsoft-Repository
<a href="https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2017">Tools for Visual Studio 2017</a>
or use the direct link to
<a href="https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=15">vs_buildtools.exe</a>.
<ul>
<li>... it’s about 1.2MB</li>
</ul></li>
<li>run <code>vs_buildtools.exe</code>
<ul>
<li>it downloads ~ 70 MB</li>
</ul></li>
<li>Select <code>Workloads => Windows => [x] Visual C++ Build Tools => [Install]</code>
<ul>
<li>it downloads 1.12 GB</li>
<li>and installs</li>
</ul></li>
<li>Re-boot (I don't know if it is required, but I did it just in case)</li>
</ol>
<p>Now <code>netifaces</code> can get installed:</p>
<pre><code>C:\RH>pip install netifaces
Collecting netifaces
Using cached https://files.pythonhosted.org/packages/81/39/4e9a026265ba944ddf1fea176dbb29e0fe50c43717ba4fcf3646d099fe38/netifaces-0.10.7.tar.gz
Installing collected packages: netifaces
Running setup.py install for netifaces ... done
Successfully installed netifaces-0.10.7</code></pre>
<hr />
<h2 id="Documentation+generation" name="Documentation+generation">Documentation generation</h2>
<p>When programming, documentation is important, although very often
it takes a back seat.</p>
<p>To help keep it up to date, it is good to make it easier to
maintain and update. One way to do that is by keeping documentation
and code together, and automating the way documentation is generated.</p>
<p>There are a number of solutions for this. The ones I looked at the most
were:</p>
<ul>
<li><a href="https://www.mkdocs.org/">mkdocs</a> with <a href="https://mkdocstrings.github.io/python/">mkdocstrings</a> :
which is nice because it uses <code>markdown</code>, however, because it is
essentially a static site generator, a lot of things needed to be done manually.</li>
<li><a href="https://www.sphinx-doc.org/en/master/">sphinx</a> : At the end, <a href="https://www.sphinx-doc.org/en/master/">sphinx</a> was the option that I liked
the most. It uses <a href="https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html">RST</a>
for markup which is different from <code>markdown</code>, but it is close enough.
Also, <a href="https://www.sphinx-doc.org/en/master/">sphinx</a> can support <code>markdown</code> via some extensions,
but I did not try that.</li>
</ul>
<h3 id="Using+%5Bsphinx%5D%5Bsphinx%5D+for+documentation" name="Using+%5Bsphinx%5D%5Bsphinx%5D+for+documentation">Using <a href="https://www.sphinx-doc.org/en/master/">sphinx</a> for documentation</h3>
<p>Prepare your environment:</p>
<pre><code>pip install sphinx sphinx-argparse</code></pre>
<p>In my project directory, I have two folders:</p>
<ul>
<li><code>docs</code> : where the documentation source resides</li>
<li><code>src</code> : where the <code>python</code> code resides</li>
</ul>
<p>Also, I ignore:</p>
<ul>
<li><code>public</code> : where the generated documentation is created. This can
then be added into a CI pipeline to publish documentation.</li>
</ul>
<p>Run:</p>
<pre><code>sphinx-quickstart</code></pre>
<p>in the <code>docs</code> folder to initialize things. This will create the files:</p>
<ul>
<li><code>conf.py</code></li>
<li><code>Makefile</code></li>
<li><code>index.rst</code></li>
</ul>
<p>Modify <code>conf.py</code> to:</p>
<ul>
<li>include the source:
<pre><code class="language-python">sys.path.insert(0, os.path.abspath('../src'))</code></pre></li>
<li>Customize project metadata</li>
<li>Enable the desired extensions. For maximum automation I enable:
<ul>
<li>sphinx.ext.autodoc</li>
<li>sphinx.ext.autosummary</li>
</ul></li>
</ul>
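<p>Putting those <code>conf.py</code> changes together, a minimal sketch (the project name and author are placeholders; <code>sphinxarg.ext</code> is the extension shipped by the <code>sphinx-argparse</code> package used below for <code>cli.rst</code>):</p>

```python
# docs/conf.py -- minimal sketch of the changes described above
import os
import sys

# make the code in ../src importable for autodoc
sys.path.insert(0, os.path.abspath('../src'))

project = 'myproject'   # placeholder metadata
author = 'me'

extensions = [
    'sphinx.ext.autodoc',      # pull API docs out of docstrings
    'sphinx.ext.autosummary',  # generate summary tables and stub pages
    'sphinxarg.ext',           # sphinx-argparse, for the CLI page
]

autosummary_generate = True
```
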
<p>Modify <code>Makefile</code> to run:</p>
<pre><code>sphinx-apidoc -o apidoc ../src</code></pre>
<p>This command extracts documentation from the <code>python</code> docstrings and creates the
relevant <code>rst</code> files.</p>
<p>For command line arguments, I use the <code>sphinx-argparse</code> extension. Create a
<code>cli.rst</code> like so:</p>
<pre><code>CLI
===
.. argparse::
:filename: ../src/cli.py
:func: cli_args
:prog: cli.py</code></pre>
<p>Where <code>cli.py</code> contains a function <code>cli_args</code> that returns an <code>ArgumentParser</code>
object.</p>
<h4 id="Example+docstring" name="Example+docstring">Example <code>docstring</code></h4>
<p>This is an example doc string to add to your code, after the element
declaration:</p>
<pre><code class="language-python">'''
Summary text

Description of the function

:param str argname: argument passed
:returns: True on success, False on failure
:rtype: bool
'''</code></pre>
<p>No need to make it too complicated.</p>
<h2 id="Passing+Reserved+Keywords+as+Keyword+arguments" name="Passing+Reserved+Keywords+as+Keyword+arguments">Passing Reserved Keywords as Keyword arguments</h2>
<p>Very often when using <em>wrapped</em> APIs that use functions with
keyword arguments, you need to pass reserved
keywords (such as <code>class</code> or <code>import</code>) as keyword arguments.</p>
<p>Of course, this is <strong>NOT</strong> allowed in python. And you will get an error
like:</p>
<pre><code>SyntaxError: invalid syntax</code></pre>
<p>To work around that, you need to place those keywords in a dictionary
and use <code>**</code> notation. So instead of:</p>
<pre><code class="language-python">response = client.service.SendSMS( toNum = '0666666666666',
                                   pass = '123456'
)</code></pre>
<p>you would:</p>
<pre><code class="language-python">response = client.service.SendSMS( toNum = '0666666666666',
**{'pass': '123456'}
)</code></pre>
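<p>The same trick works with any function that accepts keyword arguments. A self-contained illustration (<code>send_sms</code> here is a stand-in for the SOAP call above, not a real API):</p>

```python
def send_sms(**kwargs):
    # stand-in for the wrapped API call: just echoes its keyword arguments
    return kwargs

# send_sms(toNum='0666666666666', pass='123456') would be a SyntaxError,
# because `pass` is a reserved word. Unpacking a dict sidesteps the parser:
response = send_sms(toNum='0666666666666', **{'pass': '123456'})
print(response['pass'])
```

<p>Both styles can be mixed freely: regular keywords for normal names, the <code>**</code> dict only for the reserved ones.</p>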
Raspberry Pi emulation with Qemu
urn:uuid:4429db1c-f58c-8148-9ac7-b411dadd3d6f
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>The idea here is that we use a desktop PC for developing/debugging
Raspberry Pi set-ups, using <a href="https://www.qemu.org/">qemu</a> to emulate the Raspberry Pi.</p>
<p><a href="https://www.qemu.org/">qemu</a> currently supports the following configurations:</p>
<ul>
<li>Raspberry Pi Zero and 1A+ (armhf)</li>
<li>Raspberry Pi 2B (armv7)</li>
<li>Raspberry Pi 3A+ (aarch64)</li>
<li>Raspberry Pi 3B (aarch64)
<ul>
<li>This is the version I am targeting in this article. I already recycled all
my older boards.</li>
<li>I actually tried emulating the other configurations but they did not work.
Either they failed to boot, or the graphic display wouldn't work.</li>
<li>So <code>raspi3b</code> with a 64-bit run-time is the only configuration I was able
to successfully boot.</li>
</ul></li>
<li><strong>NOTE that Raspberry Pi 4 is not supported at the moment.</strong></li>
</ul>
<p>So, unfortunately the state of things is far from perfect.</p>
<h2 id="Missing+display+bug" name="Missing+display+bug">Missing display bug</h2>
<p>During my tests on the <code>raspi3b</code> configuration, I was not able to get a working
console. This has to do with this
<a href="https://github.com/raspberrypi/linux/commit/6513403f73e9bdf842597d10cb0b4775ae74d165">commit</a>,
which disables the Frame Buffer driver because <a href="https://www.qemu.org/">qemu</a> doesn't seem to
report the display properly. This causes this error to show up in the kernel log:</p>
<pre><code>bcm2708_fb soc:fb: Unable to determine number of FBs. Disabling driver.</code></pre>
<p>Before the commit, the kernel would assume that there was always <strong>one</strong> display.
On the other hand, this only affects use-cases that require a display. For headless
development, using the serial port works just fine.</p>
<p>For <a href="https://alpinelinux.org/">Alpine Linux</a>, the last working Frame Buffer version seems to be
<code>3.16.3-aarch64</code>. The display did not work for <code>3.17.0-aarch64</code>. The 32-bit
<code>3.16.3</code> would display the <code>Disabling driver.</code> message, but I wasn't able to
boot further than that.</p>
<h2 id="Emulated+hardware" name="Emulated+hardware">Emulated hardware</h2>
<p>According to the <a href="https://www.qemu.org/docs/master/system/arm/raspi.html">qemu documentation</a>, the following is implemented:</p>
<ul>
<li>ARM1176JZF-S, Cortex-A7 or Cortex-A53 CPU. I only tested the Cortex-A53 CPU
for the <code>raspi3b</code> configuration.</li>
<li>Interrupt controller</li>
<li>DMA controller</li>
<li>Clock and reset controller (CPRMAN)</li>
<li>System Timer</li>
<li>GPIO controller</li>
<li>Serial ports (BCM2835 AUX - 16550 based - and PL011)</li>
<li>Random Number Generator (RNG)</li>
<li>Frame Buffer : However the Linux kernel does not seem to find it.</li>
<li>USB host (USBH)</li>
<li>SD/MMC host controller</li>
<li>SoC thermal sensor</li>
<li>USB2 host controller (DWC2 and MPHI)</li>
<li>MailBox controller (MBOX)</li>
<li>VideoCore firmware (property)</li>
</ul>
<p>As you can see, no network interface is implemented, so you must use a USB network device.</p>
<h2 id="Getting+started" name="Getting+started">Getting started</h2>
<p>The basic command line I am using is:</p>
<pre><code> qemu-system-aarch64 \
-machine raspi3b -cpu cortex-a53 -m 1G -smp 4 -dtb bcm2710-rpi-3-b-plus.dtb \
-kernel $linux_kernel -initrd $linux_initrd -append "$cmdline" \
-sd $sd_image \
-serial stdio \
-usb \
-device usb-mouse -device usb-kbd \
-device usb-net,netdev=net0 -netdev user,id=net0,hostfwd=tcp::5555-:22</code></pre>
<p>Command explanation:</p>
<ul>
<li><code>qemu-system-aarch64</code> : emulate a 64-bit ARM system</li>
<li><code>-machine raspi3b -cpu cortex-a53 -m 1G -smp 4 -dtb bcm2710-rpi-3-b-plus.dtb</code> :
Matches the Raspberry Pi model 3B configuration. The <code>dtb</code> is a file from the
Raspberry Pi boot partition that is normally loaded by the Firmware.</li>
<li><code>-kernel $linux_kernel -initrd $linux_initrd -append "$cmdline"</code> :
Linux related boot configuration. You must provide a kernel and optional initrd
files. Usually you would extract them from your <code>sdcard</code> image. The append
is used for the kernel command line. If you want a serial console make sure
you include:
<ul>
<li><code>console=ttyAMA0,115200</code></li>
</ul></li>
<li><code>-sd $sd_image</code> : Image for the <code>sdcard</code> storage</li>
<li><code>-usb</code> : Enable USB bus. Needed for the emulated console mouse/keyboard and usb
network.</li>
<li><code>-serial stdio</code> : enables a serial console (if you are using the emulated
framebuffer). Note that <code>Ctrl+C</code> is not caught and will kill the emulation.</li>
<li><code>-device usb-mouse -device usb-kbd</code> : these are used with the virtual framebuffer
for providing keyboard and mouse.</li>
<li><code>-device usb-net,netdev=net0 -netdev user,id=net0,hostfwd=tcp::5555-:22</code> :
Enable virtual networking using <a href="https://en.wikipedia.org/wiki/Slirp">slirp</a>.</li>
</ul>
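<p>As a concrete illustration, a minimal kernel command line for a serial-console boot might look like the sketch below. The <code>root=</code> device shown is the common Raspberry Pi OS default and is an assumption on my part; check the <code>cmdline.txt</code> on your image's boot partition for the real values.</p>

```shell
# Hypothetical kernel command line for a headless boot; root= must match
# the second partition of your sdcard image.
cmdline="console=ttyAMA0,115200 root=/dev/mmcblk0p2 rootwait rw"
echo "$cmdline"
```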
<p>If you wish to run a headless (only serial console) configuration, you should
remove the <code>-serial stdio -device usb-mouse -device usb-kbd</code> options and
just use:</p>
<ul>
<li><code>-nographic</code></li>
</ul>
<p>This would automatically enable <code>-serial stdio</code> and remove the framebuffer. In
this configuration <code>Ctrl-C</code> is handled properly.</p>
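<p>The graphic vs headless option sets above can be captured in a small shell helper. This is just a sketch; the <code>qemu_display_opts</code> function name is mine, and the option strings are copied from the command line shown earlier.</p>

```shell
# qemu_display_opts: emit the display-related qemu options for the given
# mode ("gui" uses the serial console plus framebuffer input devices,
# "headless" uses -nographic instead).
qemu_display_opts() {
  case "$1" in
    gui)      echo "-serial stdio -device usb-mouse -device usb-kbd" ;;
    headless) echo "-nographic" ;;
    *)        echo "unknown mode: $1" >&2; return 1 ;;
  esac
}
qemu_display_opts headless
```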
<h2 id="Tested+OS" name="Tested+OS">Tested OS</h2>
<p>I tested the following images, with these results:</p>
<table>
<thead>
<tr>
<th>Operating System</th>
<th>Status</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://downloads.raspberrypi.org/raspios_lite_arm64/images/raspios_lite_arm64-2020-08-24/">2020-08-20-raspios-buster-arm64-lite.zip</a></td>
<td>fully working</td>
</tr>
<tr>
<td><a href="https://downloads.raspberrypi.org/raspios_lite_arm64/images/raspios_lite_arm64-2022-09-26/">2022-09-22-raspios-bullseye-arm64-lite.img.xz</a></td>
<td>Only works <em>headless</em>. Default user is not set properly, so the image needs to be modified to inject login credentials</td>
</tr>
<tr>
<td><a href="https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/aarch64/">alpine-rpi-3.16.3-aarch64.tar.gz</a></td>
<td>fully working</td>
</tr>
<tr>
<td><a href="https://dl-cdn.alpinelinux.org/alpine/v3.17/releases/aarch64/">alpine-rpi-3.17.0-aarch64.tar.gz</a></td>
<td>Only works <em>headless</em></td>
</tr>
</tbody>
</table>
<h2 id="raspi-emu" name="raspi-emu">raspi-emu</h2>
<p>For convenience, I wrote the <code>raspi-emu</code> script:</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2023/raspi-emu">raspi-emu</a></li>
</ul>
<p>This can be used to prepare images and run emulation sessions.</p>
<p>Usage:</p>
<h3 id="Preparing+base+image" name="Preparing+base+image">Preparing base image</h3>
<pre><code>raspi-emu prep [options] src</code></pre>
<p>Prepares the downloaded image so it can be used as a base for a <a href="https://www.qemu.org/">qemu</a> thin-provisioned
image.</p>
<p>Options:</p>
<ul>
<li><code>--sz=size</code> : Set the base image to the given <code>size</code>.</li>
<li><code>-c</code> | <code>--compress</code> : For <code>qcow2</code> images, create a compressed image.</li>
<li><code>--qcow2</code> : Create a <code>qcow2</code> format image. This is the default.</li>
<li><code>--raw</code> : Create a <code>raw</code> image.</li>
<li><code>--volume=name</code> : When creating <a href="https://alpinelinux.org/">AlpineLinux</a> images, use <code>name</code> as the
volume name. Otherwise a random name is generated.</li>
</ul>
<h3 id="Formatting+SDCARD+image" name="Formatting+SDCARD+image">Formatting SDCARD image</h3>
<pre><code>raspi-emu format [options] base dest</code></pre>
<p>Create an SDCARD image to be used for <a href="https://www.qemu.org/">qemu</a> emulation. It will
create a thin-provisioned image when possible.</p>
<p>Options:</p>
<ul>
<li><code>--resize=size</code> : Set the SDCARD image to the given size.</li>
</ul>
<h3 id="Running+Emulation" name="Running+Emulation">Running Emulation</h3>
<pre><code>raspi-emu run [options] sdimg</code></pre>
<p>Will boot <a href="https://www.qemu.org/">qemu</a> emulation with the specified SDCARD image. Configuration
when possible is read from the boot partition of the SDCARD.</p>
<p>Options:</p>
<ul>
<li><code>--vfb-only</code> : Enables the virtual framebuffer and disables the serial console.</li>
<li><code>--vfb</code> : Enables the virtual framebuffer. The serial console is kept enabled.</li>
<li><code>--no-vfb</code> : Disables virtual framebuffer. This is the default.</li>
<li><code>--ttycon</code> : Enables the serial console for Linux logins. (Default)</li>
<li><code>--no-ttycon</code> : Disables the serial console for Linux logins.</li>
<li><code>--vnet</code> : Enable virtual network. (Default)</li>
<li><code>--no-vnet</code> : Disables the virtual network.</li>
<li><code>--portfwd</code> : Enables virtual network. Forwards port 5555 on host to port 22 on VM.</li>
<li><code>--portfwd=rule</code> : Adds the given port forwarding rule. <br />Example rule: <code>tcp::5555-:22</code></li>
<li><code>--no-portfwd</code> : Disables port forwarding. (Default)</li>
<li><code>--raspi3b</code> : Emulate a Raspberry Pi Model 3B. (Default)</li>
</ul>
<p>The default is running headless (only serial console) with networking enabled.</p>
Home Assistant Wall Panel
urn:uuid:d92016f2-d991-eb55-cbac-f2ea92ace405
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>For a while I was using <a href="https://github.com/HyperTechnology5/TabletClock">TabletClock</a> with old tablets. But
this has not been updated in a while. I was thinking of writing my
own version until I found <a href="https://github.com/thecowan/wallpanel-android">WallPanel</a>.</p>
<p>Essentially it is a purpose-built web browser with special features that make it
possible to use it as a replacement for <a href="https://github.com/HyperTechnology5/TabletClock">TabletClock</a>. I could
replace <a href="https://github.com/HyperTechnology5/TabletClock">TabletClock</a> with a web page showing the time and a weather
forecast and have <a href="https://github.com/thecowan/wallpanel-android">WallPanel</a> point to it. Easy and simple, and it can be
customized in multiple ways.</p>
<p>Currently I am using <a href="https://github.com/thecowan/wallpanel-android">WallPanel</a> with <a href="https://www.home-assistant.io/">HomeAssistant</a>. I am using
these features:</p>
<ul>
<li>Date/time screen saver with auto off by face recognition</li>
<li>Web Cam functionality</li>
</ul>
<p>I normally have it on a <a href="https://www.home-assistant.io/">HomeAssistant</a> panel showing time, weather forecast
and a few choice controls.</p>
<p><img src="/images/2023/wallpanel.png" alt="panel" /></p>
RDP vs VNC
urn:uuid:fa9eb052-d45a-370e-8851-89ecfa097950
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>For years I have been using <a href="https://en.wikipedia.org/wiki/Virtual_Network_Computing">VNC</a> for my remote desktop needs. This usually
works well enough. The features that I like are:</p>
<ul>
<li>Basic set-up is easy</li>
<li>Desktop sessions are persistent</li>
<li>Can be used to view an actual X11.org desktop.</li>
<li>Browser based clients via <a href="https://novnc.com/info.html">noVNC</a> or <a href="https://guacamole.apache.org/">Guacamole</a></li>
</ul>
<p>On the other hand, a number of features are either not implemented or are
not easily implementable.</p>
<ul>
<li>On-demand desktop sessions. Usually you can hack scripts to do this. Or you
can use <code>inetd</code> mode to create a session on-demand, however, this loses
session persistence.</li>
<li>In theory, audio could be redirected to a remote host, since most desktops use
<a href="https://en.wikipedia.org/wiki/PulseAudio">pulseaudio</a>; in practice this is a separate protocol, so it is not
simple to set up.</li>
</ul>
<p>So I have now found <a href="https://en.wikipedia.org/wiki/Xrdp">xrdp</a> which provides:</p>
<ul>
<li>Easy basic set-up</li>
<li>On-demand desktop sessions (session management) with persistent session support.</li>
<li>Sound redirection (however I have not been able to make this work)</li>
<li>Browser based clients via <a href="https://guacamole.apache.org/">Guacamole</a></li>
</ul>
<p>For the client side, you can either use:</p>
<ul>
<li><a href="https://4it.com.au/kb/article/how-to-start-remote-desktop-rdp-from-the-command-prompt/">RDP Client</a> : which comes with MS-Windows or</li>
<li><a href="https://www.freerdp.com/">freerdp</a> : For Linux.</li>
</ul>
<p>Still I have not been able to:</p>
<ul>
<li>Enable sound re-direction
<ul>
<li><a href="https://c-nergy.be/blog/?p=13655">Configure sound</a></li>
<li>Essentially compile <a href="https://github.com/neutrinolabs/pulseaudio-module-xrdp">module</a></li>
</ul></li>
<li>Use it to view an actual X11.org desktop. For this I am simply using <a href="https://en.wikipedia.org/wiki/X11vnc">x11vnc</a>
and just using the <code>vncviewer</code>.</li>
</ul>
QNAP Snapshots
urn:uuid:139013d4-a884-034b-13de-f5915cb3c900
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I wrote a small tool to access QNAP snapshots from the Linux command line.</p>
<p>Prerequisites:</p>
<ul>
<li>Snapshots have to be enabled</li>
<li>You need a <code>/share/netcfg</code> share containing these files:
<ul>
<li>In my case, I set this share as <code>read-only</code> with <code>root-squash</code>.</li>
<li><code>admin.yaml</code> : contains the private/public keys and the configuration
of the forced command. For access control, it is only readable
by the group, and the file is owned by the UNIX group that can do snapshot
operations.</li>
<li><code>registry.yaml</code> : optional; only needed if you are changing the <code>admin</code> username.</li>
</ul></li>
</ul>
<p>Afterwards run:</p>
<ul>
<li><code>install_key.sh</code> <strong>server-name</strong></li>
</ul>
<p>This installs the public key into the <code>authorized_keys</code> file. You will need
ssh access for this to work.</p>
<p>You need to do this on all the QNAP servers that offer snapshots.</p>
<p>Copy <code>qsnap</code> to somewhere in your path.</p>
<h2 id="Usage" name="Usage">Usage</h2>
<h3 id="Listing+snapshots" name="Listing+snapshots">Listing snapshots</h3>
<pre><code class="language-bash">qsnap</code></pre>
<p>List snapshots for the current directory</p>
<pre><code class="language-bash">qsnap ls file-path</code></pre>
<p>List snapshots for the given <code>file-path</code>. <code>file-path</code> can be provided
multiple times.</p>
<h3 id="Reading+snapshot+files" name="Reading+snapshot+files">Reading snapshot files</h3>
<pre><code class="language-bash">qsnap cat [--snap=snapid] file1 [file2 file3 ...]</code></pre>
<p>Would display the given file(s) from the snapshot. If <code>snapid</code> is not
specified will use the latest available snapshot.</p>
<h3 id="Dumping+snapshots" name="Dumping+snapshots">Dumping snapshots</h3>
<pre><code class="language-bash">qsnap tar [--snap=snapid] [options] path</code></pre>
<p>Will dump the given <code>path</code> as a tarball. If <code>snapid</code> is not
specified will use the latest available snapshot.
The <code>path</code> can be either a file or directory.</p>
<p>Additional options:</p>
<ul>
<li><code>--base64</code> : Data will be dumped using MIME Base64 encoding</li>
<li><code>--no-compress</code> : Default is to compress. This disables compression</li>
<li><code>-v</code> : Pass <code>v</code> flag to <code>tar</code> command.</li>
</ul>
<p>All this can be found on <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2023/qsnap">github</a>.</p>
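<p>A side note on the <code>--base64</code> option: the receiving end can decode the stream back into a tarball with the standard <code>base64</code> tool. A trivial round-trip, using a local string as a stand-in for the remote dump:</p>

```shell
# Round-trip: encode as the --base64 option would, then decode back.
# (Stand-in data; a real dump would pipe the tar stream through base64.)
data="snapshot test data"
encoded=$(printf '%s' "$data" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```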
Docker on Void
urn:uuid:90c92086-4c8d-2a4c-29a1-d323f9f155ab
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a quick recipe to run Docker on void:</p>
<ul>
<li>Make sure your system is up-to-date:
<ul>
<li><code>sudo xbps-install -Syu</code></li>
</ul></li>
<li>Install docker executables:
<ul>
<li><code>sudo xbps-install -S docker</code></li>
</ul></li>
<li>Check if docker was installed properly:
<ul>
<li><code>docker --version</code></li>
</ul></li>
<li>Enable services:
<ul>
<li><code>sudo ln -s /etc/sv/containerd /var/service</code></li>
<li><code>sudo ln -s /etc/sv/docker /var/service</code></li>
</ul></li>
<li>Add your user to the <code>docker</code> group:
<ul>
<li><code>sudo usermod -a -G docker $(whoami)</code></li>
</ul></li>
</ul>
<p>You can now start using docker. You may need to log out and log in again for the group
membership to be updated.</p>
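<p>To check whether the group membership is active in your current session, you can inspect the output of <code>id -nG</code>. A small sketch (the <code>in_group</code> helper name is mine):</p>

```shell
# in_group GROUP LIST: succeed if GROUP appears in the space-separated LIST
# (as produced by `id -nG`). On a real system: in_group docker "$(id -nG)"
in_group() {
  echo "$2" | tr ' ' '\n' | grep -qx "$1"
}
in_group docker "wheel docker audio" && echo "docker group active"
```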
<p>Here is a <a href="https://docker-curriculum.com/">beginners tutorial</a></p>
Home Assistant sensors
urn:uuid:a1c9b6f8-82f2-dfa5-ba09-b48a600085cc
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I finished migrating my <a href="https://support.getvera.com/hc/en-us/articles/360021950353-Welcome-to-Vera-Getting-Started">VeraEdge</a> to <a href="https://www.home-assistant.io/">Home Assistant</a>. I think after
using it for some time, I find <a href="https://www.home-assistant.io/">Home Assistant</a> far superior to the
<a href="https://support.getvera.com/hc/en-us/articles/360021950353-Welcome-to-Vera-Getting-Started">VeraEdge</a> in every way.</p>
<p>So, I took the time to mostly standardise my sensors, which makes things simpler to
manage/maintain.</p>
<p>As such, essentially I am only using 4 types of sensors:</p>
<ul>
<li><a href="https://www.robbshop.nl/neo-coolcam-raam-deur-sensor-z-wave-plus">Neo Coolcam Door/Window Sensor DS01Z</a> :
This is a door/window sensor. I still have a couple of these sensors from before.
They are EOL now, being replaced by the DS07Z.<br />
<img src="/images/2023/sensor-ds01z.png" alt="DS01Z" /></li>
<li><a href="https://www.robbshop.nl/neo-coolcam-raam-deursensor-z-wave-plus-met-usb-voeding">New Coolcam Door/Window Sensor DS07Z</a> :
This is a door/window sensor with integrated temperature and humidity sensors. I
was standardising on this sensor type for door/window and also for temperature
and humidity until they became hard to source. The last couple of sensors of
this type I had to buy from AliExpress. For the moment, I have enough sensors,
but in the future, I may need to find a new source.<br />
<img src="/images/2023/sensor-ds07z.png" alt="DS07Z" /><br />
Since this sensor can be charged using a USB port, I have an additional sensor
to be used as a "charging" sensor. That way I can replace batteries, and the
low batteries can be charged in the "charging" sensor. This is because the
sensors are usually in hard-to-reach places, where USB power is not
easily available.</li>
<li><a href="https://www.robbshop.nl/fibaro-smoke-sensor-2-z-wave-plus">Fibaro Smoke Sensor 2</a> :
This is a Smoke Sensor with integrated temperature meter. I am using this
to replace my <strong>"dumb"</strong> smoke sensors. These sensors only detect smoke
and do <strong>not</strong> detect CO<sub>2</sub>. This is not a problem because, on one hand, these sensors are mounted on the ceiling, which is detrimental for CO<sub>2</sub>
detection as CO<sub>2</sub> tends to accumulate on the floor first. The other
is that CO<sub>2</sub> detection is more important for smokeless fires (i.e. gas
burning). Since we are only using this for the water heater, it is less
of a priority.<br />
<img src="/images/2023/sensor-smoke.png" alt="Smoke Sensor" /></li>
<li><a href="https://www.robbshop.nl/neo-coolcam-overstromingssensor-z-wave-plus-eol">New Coolcam Water Leak sensor</a> :
This is a water leak sensor. Unfortunately it is already EOL.<br />
<img src="/images/2023/sensor-leak.png" alt="Water Leak Sensor" /></li>
</ul>
<p>I used to have other sensors types:</p>
<ul>
<li>Philio Tech Door/Window Sensor</li>
<li>Philio Tech multi sensors :
While these sensors were good in the sense that they paired easily with my
<a href="https://support.getvera.com/hc/en-us/articles/360021950353-Welcome-to-Vera-Getting-Started">VeraEdge</a> and gave accurate readings, they were (at least to me)
not easy to open. So every time I would want to replace the battery
I would <strong>accidentally</strong> break the latches.</li>
<li>Other door sensors :
Also, a number of sensors that I tried made it difficult to stock up on
spare batteries. Furthermore, especially for the door/window sensors,
some had awkward shapes.</li>
</ul>
Happy New Year 2023
urn:uuid:10f47ba9-ae4c-04c1-4f7b-c599df5b048e
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><img src="/images/2023/newyear-2023.png" alt="NewYear2023" /></p>
<p>Best wishes for 2023!</p>
<p>This website is now <strong>10 Years Old</strong>.</p>
Home Assistant RFXCOM Integration
urn:uuid:b7cd3709-4ebc-0c1c-cf7d-06508704abb0
2024-03-05T00:00:00+01:00
Alejandro Liu
<h3 id="RFXCOM+RFXtrx" name="RFXCOM+RFXtrx">RFXCOM RFXtrx</h3>
<p><a href="https://www.home-assistant.io/integrations/rfxtrx/">This</a> integration is
to control RFXtrx devices. I am using it to control Somfy blinds and KlikAanKlikUit
remotes.</p>
<ul>
<li>Add integration: <code>RFXCOM RFXtrx</code></li>
<li>Connection type: <code>Serial</code></li>
<li>Select device: <code>RFXtrx433XL - RFXtrx433XL, s/n: * - RFXCOM</code></li>
</ul>
<p>Remember to configure the RFXCOM unit <strong>before</strong> using it with <a href="https://www.home-assistant.io/">Home Assistant</a>
as <a href="https://www.home-assistant.io/">Home Assistant</a> integration has limited control of it.</p>
<p>Specifically, you need to use the RFXCOM <a href="http://www.rfxcom.com/downloads.htm">rfxmngr</a>
tool to enable the relevant protocols and in the case of Somfy blinds, pair them.</p>
<p>For my home I enabled the <code>AC</code> protocol for use with
<a href="https://klikaanklikuit.nl/">KAKU</a> devices.</p>
<h4 id="KlikAanKlikUit" name="KlikAanKlikUit">KlikAanKlikUit</h4>
<p><a href="https://klikaanklikuit.nl/">KAKU</a> are inexpensive home automation devices that
use a wireless protocol very similar to X-10.</p>
<p>The simplest way is to:</p>
<ul>
<li>List integrations</li>
<li>Configure <code>RFXTRX</code></li>
<li>Enable Automatic Add</li>
<li>Submit</li>
</ul>
<p>Afterwards, just use the lights and devices, and they will be added automatically.</p>
<h2 id="Somfy" name="Somfy">Somfy</h2>
<p>For pairing the Somfy blinds, you can refer to <a href="https://www.vlieshout.net/home-assistant-and-somfy-rts-with-rfxcom/">this article</a>
and <a href="https://www.vesternet.com/en-eu/pages/apnt-79-controlling-somfy-rts-blinds-with-the-rfxtrx433e">this one</a>.</p>
<p>In my configuration I am using:</p>
<pre><code>remoteID: 010E1 > 0 : 10 : E1 : 010E1
123456
ABCDEF
remoteID: 69121 > 0 : 16 : 225 : 4321
unitCode: 01
1234567890123456
071a000000000000
071a00000010e101</code></pre>
<hr />
<h2 id="Tweaks" name="Tweaks">Tweaks</h2>
<p>For some reason my Skylight cover says "Open" when it is "Closed" and
vice versa.</p>
<p>To clean that up we use the:</p>
<ul>
<li><a href="https://www.home-assistant.io/integrations/cover.template/">https://www.home-assistant.io/integrations/cover.template/</a></li>
<li><a href="https://www.home-assistant.io/integrations/cover/">https://www.home-assistant.io/integrations/cover/</a></li>
</ul>
<p>To create a "template" cover that shows things properly. In <code>configuration.yaml</code>
we have this:</p>
<pre><code class="language-yaml">cover:
- platform: template
covers:
study_skylight_p:
unique_id: 13d7a089-c536-4dc0-b2c0-e5dae6521460
open_cover:
service: cover.close_cover
target:
entity_id: cover.rfy_0010e1_1
close_cover:
service: cover.open_cover
target:
entity_id: cover.rfy_0010e1_1
stop_cover:
service: cover.stop_cover
target:
entity_id: cover.rfy_0010e1_1
</code></pre>
Home Assistant Behind Reverse Proxy
urn:uuid:d6baf888-30e6-7aed-f7ad-ff0daa5b78fa
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>To set-up a reverse proxy I took the following steps:</p>
<ul>
<li>configure DNS</li>
<li>get Letsencrypt certificates</li>
<li>Configure NGINX</li>
<li>Configure Home Assistant to trust the proxy</li>
</ul>
<p>At the time of this writing I can't really confirm if the reverse proxy
configuration for home assistant is working, as I can't tell what IP
address it is using for the <code>trusted_networks</code> authenticator.</p>
<p>Sample NGINX configuration:</p>
<pre><code class="language-nginx">
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
# Update this line to be your domain
server_name example.com;
# These shouldn't need to be changed
listen [::]:80 default_server ipv6only=off;
return 301 https://$host$request_uri;
}
server {
# Update this line to be your domain
server_name example.com;
# Ensure these lines point to your SSL certificate and key
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Use these lines instead if you created a self-signed certificate
# ssl_certificate /etc/nginx/ssl/cert.pem;
# ssl_certificate_key /etc/nginx/ssl/key.pem;
# Ensure this line points to your dhparams file
ssl_dhparam /etc/nginx/ssl/dhparams.pem;
# These shouldn't need to be changed
listen [::]:443 ssl default_server ipv6only=off; # if your nginx version is >= 1.9.5 you can also add the "http2" flag here
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
# ssl on; # Uncomment if you are using nginx < 1.15.0
ssl_protocols TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
proxy_buffering off;
location / {
proxy_pass http://127.0.0.1:8123;
proxy_set_header Host $host;
proxy_redirect http:// https://;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}</code></pre>
<p>Home Assistant <code>configuration.yaml</code> entries:</p>
<pre><code class="language-yaml">http:
# For extra security set this to only accept connections on localhost if NGINX is on the same machine
# Uncommenting this will mean that you can only reach Home Assistant using the proxy, not directly via IP from other clients.
# server_host: 127.0.0.1
use_x_forwarded_for: true
# You must set the trusted proxy IP address so that Home Assistant will properly accept connections
# Set this to your NGINX machine IP, or localhost if hosted on the same machine.
trusted_proxies: <NGINX IP address here, or 127.0.0.1 if hosted on the same machine></code></pre>
Looking up docker image tags
urn:uuid:e6d46bae-ffb6-b04e-328b-56a897e6d386
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This recipe is to check the tags defined for a specific Docker image
in <a href="https://hub.docker.com">Docker Hub</a>.</p>
<p>The basic API is at <a href="https://registry.hub.docker.com/v2">https://registry.hub.docker.com/v2</a></p>
<p>So the format is as follows:</p>
<p><strong><a href="https://registry.hub.docker.com/v2/repositories/">https://registry.hub.docker.com/v2/repositories/</a></strong><strong>{namespace}</strong>/<strong>{image}</strong><strong>/tags/</strong></p>
<p>Where:</p>
<ul>
<li><strong>namespace</strong> : usually is the user account posting the image. For <strong>official</strong> images
set the <strong>namespace</strong> to <code>library</code>.</li>
<li><strong>image</strong> : Image name.</li>
</ul>
<p>Examples:</p>
<ul>
<li><a href="https://hub.docker.com/_/alpine">Docker Official alpine image</a>
<ul>
<li>namespace: library</li>
<li>image : alpine</li>
</ul></li>
<li><a href="https://hub.docker.com/r/photoprism/photoprism">photoprism/photoprism</a>
<ul>
<li>namespace : photoprism</li>
<li>image : photoprism</li>
</ul></li>
</ul>
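<p>The URL scheme above can be sketched as a tiny shell helper (the <code>tags_url</code> name is mine; it just applies the <code>library</code> default for official images):</p>

```shell
# tags_url IMAGE: print the v2 tags endpoint for IMAGE, defaulting the
# namespace to "library" when none is given (i.e. official images).
tags_url() {
  case "$1" in
    */*) ns=${1%%/*} img=${1#*/} ;;
    *)   ns=library  img=$1 ;;
  esac
  echo "https://registry.hub.docker.com/v2/repositories/$ns/$img/tags/"
}
tags_url alpine
tags_url photoprism/photoprism
```

You can then feed the printed URL to the <code>curl</code> and <code>jq</code> pipeline below.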
<p>So, you can then use <code>curl</code> and <code>jq</code> to access the relevant data. For example,
to get the tags, last updated time and digest as a tsv:</p>
<pre><code class="language-bash">curl -s -L $URL | jq -r '(.results[] | select(.tag_status == "active") | [.name, .last_updated
, .digest]) | @tsv'
</code></pre>
<h2 id="v1+API" name="v1+API">v1 API</h2>
<p>You could also use the v1 API while it is still available:</p>
<p><strong><a href="https://registry.hub.docker.com/api/content/v1/repositories/public/">https://registry.hub.docker.com/api/content/v1/repositories/public/</a></strong><strong>{namespace}</strong>/<strong>{image}</strong><strong>/tags/</strong></p>
Home Assistant Large Clock
urn:uuid:cdfc0d43-19b4-436a-6107-b7172c9ec759
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This recipe is my version of providing a "large clock" face in the
<a href="https://home-assistant.io/">home assistant</a> dashboard.</p>
<p>Enable serving local static files:</p>
<ul>
<li>Create directory <code>www</code> in your <code>config</code> directory.</li>
<li>Restart <a href="https://home-assistant.io/">home assistant</a>.</li>
<li>Static files are now available as <code>http://homeassistant.local:8123/local/</code>.</li>
</ul>
<p>Place the HTML with your clock in a file i.e. <code>$config/www/clock.html</code>.
I am using this:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/hassio/clock.html"></script>
<p>Then add a <a href="https://www.home-assistant.io/dashboards/iframe/">webpage</a> card:</p>
<pre><code class="language-yaml">type: iframe
url: /local/clock.html
aspect_ratio: 45%</code></pre>
<p>Obviously you can fully exercise your HTML to get your clock to look
exactly like you want.</p>
<p>I wrote this because I couldn't get the
<a href="https://www.home-assistant.io/dashboards/markdown/">markdown</a> card
to style properly. Also, I wasn't keen on installing the
<a href="https://www.home-assistant.io/integrations/time_date/">time and date</a>
sensor which is required for the clock examples based on the
<a href="https://www.home-assistant.io/dashboards/picture-elements/">picture elements</a>
card.</p>
<p>Other implementations:</p>
<ul>
<li><a href="https://community.home-assistant.io/t/really-simple-big-clock/255971">https://community.home-assistant.io/t/really-simple-big-clock/255971</a></li>
<li><a href="https://community.home-assistant.io/t/just-a-big-clock/69976/2">https://community.home-assistant.io/t/just-a-big-clock/69976/2</a></li>
</ul>
Home Assistant HTTP Based Authentication Backend
urn:uuid:95bcddfa-0c2c-2ac9-12b3-2b2ac3dc359c
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This recipe is to authenticate users using a web server providing
<a href="https://en.wikipedia.org/wiki/Basic_access_authentication">Basic HTTP authentication</a>
for its users.</p>
<p>This is useful if you want to consolidate users/passwords in a single
system. So instead of managing users on <a href="https://www.home-assistant.io/">Home Assistant</a> you can
have all users managed from a central location.</p>
<p>It uses the <a href="https://www.home-assistant.io/">Home Assistant</a>
<a href="https://www.home-assistant.io/docs/authentication/providers/#command-line">command line</a>
authentication provider and the <code>curl</code> command.</p>
<p>Making it work is quite simple. Copy this script to your <code>/config</code>
directory as <code>curl_auth.sh</code>:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/hassio/curl_auth.sh"></script>
<p>Add the following lines to your <code>configuration.yaml</code>:</p>
<pre><code class="language-yaml">homeassistant:
auth_providers:
- type: command_line
command: /config/curl_auth.sh
args: [ "http://your-web-site-url/" ]
meta: true</code></pre>
<p>Make sure that you modify the URL in the configuration to a
web server that is doing Basic HTTP authentication. It uses <code>curl</code>
for checking URLs, so <code>http</code> and <code>https</code> protocols would work.</p>
<p>If using <code>https</code> with self-signed certificates, you need to pass the
<code>-k</code> option which is then passed to <code>curl</code>. See
<a href="https://man7.org/linux/man-pages/man1/curl.1.html">curl(1)</a>.</p>
<p>Example:</p>
<pre><code class="language-yaml">args: [ "-k", "https://your-web-site-url/" ]</code></pre>
Moving to Home Assistant
urn:uuid:267cd8ac-044f-0627-c4ab-2fdcbdb2c8e5
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I am busy moving away from my <a href="https://support.getvera.com/hc/en-us/articles/360021753434-VeraEdge-Getting-Started-How-To">VeraEdge</a>
installation to a <a href="https://www.home-assistant.io/">Home Assistant</a> running on a <a href="https://www.raspberrypi.com/products/raspberry-pi-4-model-b/">Raspberry Pi 4</a>. This is
because it looks like the maker of the VeraEdge was bought and it is slowly being
phased out.</p>
<p>For this I am using the following parts:</p>
<ul>
<li><a href="http://wiki.geekworm.com/X728">Geekwork X728 18650 UPS + X728-C1 case</a> : This
provides a case (with cooling fan), UPS and RTC functionality.</li>
<li><a href="https://aeotec.com/products/aeotec-z-stick-gen5/">Aeotec Z-Stick Gen5+</a> : For
Z-Wave compatibility.</li>
<li><a href="https://phoscon.de/en/conbee2">ConBee 2</a> : For ZigBee compatibility.</li>
<li>A Raspberry Pi 4 - 4GB</li>
<li>64GB SD Card. Actually I wanted to use a 32GB SD card, but the 64GB had better
specs and was only a couple of bucks more expensive.</li>
</ul>
<p>I will be reusing these components:</p>
<ul>
<li><a href="https://www.robbshop.nl/slimme-meter-kabel-usb-p1-1-meter">Smart Meter USB-P1 cable</a></li>
<li><a href="http://www.rfxcom.com/RFXtrx433XL-USB-43392MHz-Transceiver">RFXtrx433XL USB HA controller</a></li>
</ul>
<h2 id="Hardware+build" name="Hardware+build">Hardware build</h2>
<p>Building the case is simple and straightforward. You can follow this
video on <a href="https://www.youtube.com/watch?v=QOG30LXb6ds">youtube</a>.</p>
<p>The steps are:</p>
<ul>
<li>Open the case.</li>
<li>Install the fan.</li>
<li>Install the additional battery holder.</li>
<li>Install the power button.</li>
<li>Screw the spacers to the Raspberry Pi.</li>
<li>Insert the X728 UPS hat on top and screw in place.</li>
<li>Install batteries.</li>
<li>Plug connectors.</li>
<li>Screw the Raspberry Pi to the case.</li>
<li>Test that everything is in working order.</li>
<li>Optional: Set the jumper selector to auto power-on.</li>
<li>Close the case.</li>
</ul>
<h2 id="Case+software" name="Case+software">Case software</h2>
<p>I did not like the software that comes with the case, so I rolled
my own. The RTC uses the standard <code>rtc-ds1307</code> driver which is in the Linux
kernel. For GPIO programming I am using the <code>/sysfs</code> interface. The
only component that requires <em>custom</em> programming was the battery
charge and voltage readings. For that I wrote a small C program.</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2022/X728/src">x728batt</a></li>
</ul>
<p>This is tied to <code>systemd</code> through these files:</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/x728clock.service">/etc/systemd/system/x728clock.service</a>
executes <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/scripts/clock.sh">clock.sh</a>
This is used to load the RTC kernel modules and activate the RTC in the
i<sup>2</sup>c bus.</li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/x728ups.service">/etc/systemd/system/x728ups.service</a>
executes <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/scripts/upsmon.sh">upsmon.sh</a>
This is used to monitor the Push Button, the A/C power status and if the
A/C power is lost, the battery status. It will trigger a graceful shutdown
if the button is pressed or if the battery charge is insufficient.</li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/gpio-poweroff">/lib/systemd/system-shutdown/gpio-poweroff</a>
This is used to turn off the UPS power if the user issues the <code>poweroff</code>
command.</li>
</ul>
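<p>For reference, a service of this shape looks roughly like the following sketch; the paths here are hypothetical, so see the linked unit files for the real definitions:</p>

```text
[Unit]
Description=X728 UPS monitor
After=multi-user.target

[Service]
ExecStart=/usr/local/bin/upsmon.sh
Restart=always

[Install]
WantedBy=multi-user.target
```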
<p>In addition to this, I created a script that injects the relevant files into the <a href="https://www.home-assistant.io/">Home Assistant</a>
Raspberry Pi image and adds a
<a href="https://rauc.readthedocs.io/en/latest/reference.html#system-configuration-file">RAUC post-install handler</a>
so that OTA upgrades will keep these customizations.</p>
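<p>Such a handler is declared in the image's RAUC <code>system.conf</code> along these lines (the script path is hypothetical):</p>

```text
[handlers]
post-install=/usr/lib/rauc/post-install.sh
```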
<p>As a bonus I am adding a <a href="https://github.com/TortugaLabs/muninlite">munin-node</a>
systemd unit for system monitoring. And yes, I am old-fashioned.</p>
<p>Also, I am enabling <code>ssh</code> to the underlying Operating System.</p>
<h2 id="Home+Assistant+installation" name="Home+Assistant+installation">Home Assistant installation</h2>
<p>Following the raspberry pi installation <a href="https://www.home-assistant.io/installation/raspberrypi/">guide</a>
is quite straightforward.</p>
<p>I chose to use the HAOS image install as it gives a more <em>consumer device</em>
experience.</p>
<ul>
<li>Download the 64bit image for Raspberry Pi 4 from the
<a href="https://github.com/home-assistant/operating-system/releases">releases page</a></li>
<li>Use the <a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2022/X728/OTA">haos-x728.sh</a> script
to customize the image to include my X728 files.</li>
<li>Write the modified image to SD card.</li>
</ul>
<p>Boot the raspberry pi from the new SD card and do the GUI installation.
In my case, I needed to log in to my router to look up the IP address,
since it is configured via DHCP. I also statically assign an IP address and DNS
name based on the MAC address. To make sure that the DNS name and the host name
match, I modified it in the configuration:</p>
<p><code>Settings</code> -> <code>System</code> -> <code>Network</code></p>
<p>Modify <code>hostname</code>.</p>
<h2 id="Initial+configuration" name="Initial+configuration">Initial configuration</h2>
<p>Set up home areas. I set-up one area per room, following a naming convention:</p>
<p>F <strong>floor-number</strong> <strong>Room-name</strong></p>
<p>For example:</p>
<ul>
<li><code>F0 Kitchen</code></li>
<li><code>F2 Attic</code></li>
</ul>
<p>Also, create additional areas for:</p>
<ul>
<li><code>External</code> : External items</li>
<li><code>System</code> : System related entities and devices</li>
</ul>
<p>For devices, I am using this naming convention:</p>
<ul>
<li><strong>Room-name</strong> <strong>room-section</strong> <strong>device</strong> <strong>optional</strong></li>
</ul>
<p>The idea is to make it simple to guess the name for voice recognition.</p>
<ul>
<li><strong>Room-name</strong> : this should match the area.</li>
<li><strong>room-section</strong> : <strong>optional</strong>, section of the room this applies to.</li>
<li><strong>device</strong> : device type.
<ul>
<li>Chromecast : google chromecast device</li>
<li>Display : google nest hub</li>
<li>TV : with optional <strong>casting</strong>, <strong>upnp</strong> or <strong>api</strong></li>
<li>Skylight</li>
<li>Light</li>
<li>Switch</li>
<li>Double Switch</li>
<li>Remote: Remote control</li>
<li>Window Sensor</li>
<li>Door Sensor</li>
</ul></li>
<li><strong>optional</strong> : used when multiple devices of the same type are in the same room.</li>
</ul>
<h2 id="Add-Ons" name="Add-Ons">Add-Ons</h2>
<p>I installed the following add-ons:</p>
<ul>
<li>Home Assistant Community Add-ons
<ul>
<li><a href="https://github.com/hassio-addons/addon-vscode">Studio Code Server</a> : for
editing files. Press <kbd>F1</kbd> and start typing <code>home assistant</code> to view
available integration commands. This is needed because (unfortunately)
not everything can be configured through the UI.</li>
<li><a href="https://github.com/hassio-addons/addon-zwave-js-ui">Z-Wave JS UI</a> : Instead
of the <strong>Official add-ons</strong> version. The community add-on gives you a control panel with
more detailed control features. Specifically, you can set group associations.</li>
</ul></li>
</ul>
<p>I also added my own repository: <a href="https://github.com/iliu-net/hassio-addons">https://github.com/iliu-net/hassio-addons</a></p>
<ul>
<li>rsync-folders : save data and backups to remote server using rsync</li>
<li>watchdogdev : watchdog timer</li>
</ul>
<h2 id="Further+Configuration" name="Further+Configuration">Further Configuration</h2>
<p>These configurations require modifying files, so I usually do them
<strong>after</strong> installing <code>Studio Code Server</code>.</p>
<h3 id="Modifying+authentications" name="Modifying+authentications">Modifying authentications</h3>
<p>To simplify logins on local networks (especially to support physical
control panels) I configured the
<a href="https://www.home-assistant.io/docs/authentication/providers/#trusted-networks">trusted_networks</a>
auth provider by adding the following lines to <code>configuration.yaml</code>:</p>
<pre><code class="language-yaml">homeassistant:
auth_providers:
- type: trusted_networks
trusted_networks:
- 192.168.2.0/24</code></pre>
<p>Essentially, devices connecting from the
<em>trusted networks</em> do not need to log in with a username/password.</p>
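<p>A quick way to sanity-check which client addresses fall inside the trusted range is a short Python one-liner (any machine with Python 3 will do):</p>

```shell
python3 -c '
import ipaddress
net = ipaddress.ip_network("192.168.2.0/24")
for ip in ("192.168.2.17", "10.0.0.5"):
    print(ip, "trusted" if ipaddress.ip_address(ip) in net else "untrusted")
'
# prints:
#   192.168.2.17 trusted
#   10.0.0.5 untrusted
```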
<h3 id="System+temperature" name="System+temperature">System temperature</h3>
<p>For fun I also configured a CPU temperature sensor. Add this
to <code>configuration.yaml</code>:</p>
<pre><code class="language-yaml">sensor:
### command line
- platform: command_line
name: CPU Temperature
command: "cat /sys/class/thermal/thermal_zone0/temp" # RPi
# command: "cat /sys/class/thermal/thermal_zone2/temp" # NUC
# If errors occur, remove degree symbol below
unit_of_measurement: "°C"
value_template: "{{ '%.1f' | format(value | multiply(0.001)) }}" # RPi & NUC
unique_id: sys_cpu_temp</code></pre>
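<p>The <code>value_template</code> just converts the kernel's millidegree reading to degrees. You can check the conversion from a shell, substituting a real reading from <code>/sys/class/thermal/thermal_zone0/temp</code>:</p>

```shell
# Simulated reading in millidegrees Celsius; on the Pi, use:
#   raw=$(cat /sys/class/thermal/thermal_zone0/temp)
raw=48312
awk -v t="$raw" 'BEGIN { printf "%.1f\n", t * 0.001 }'
# prints 48.3
```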
<h2 id="Integrations" name="Integrations">Integrations</h2>
<h3 id="DSMR+Slimme+Meter" name="DSMR+Slimme+Meter">DSMR Slimme Meter</h3>
<p><a href="https://www.home-assistant.io/integrations/dsmr">This</a> integration is to read
energy consumption as provided by NL smart meters. Just make sure that
you get the right cable. I am using a cable from
<a href="https://www.robbshop.nl/slimme-meter-kabel-usb-p1-1-meter">ROBBshop</a>. Simply
plug it in and add it to the integrations.</p>
<p>To include:</p>
<ul>
<li><code>Add Integration</code> : <code>DSMR Slimme Meter</code></li>
<li><code>Serial</code> : connection</li>
<li><code>Select device</code>: Select the right serial port (should be easy to identify).</li>
<li><code>DSMR Version</code> : <code>5</code></li>
</ul>
<p>This is very easy to add and configure.</p>
<h3 id="ZigBee+Home+Automation" name="ZigBee+Home+Automation">ZigBee Home Automation</h3>
<p><a href="https://www.home-assistant.io/integrations/zha">This</a> integration is automatically
discovered for supported coordinators. I am using
a <a href="https://phoscon.de/en/conbee2">ConBee II</a> ZigBee coordinator. As long as the
device is supported (see
<a href="https://zigbee.blakadder.com/index.html">compatibility list</a> ) things are fairly
easy and simple.</p>
<p>There are multiple options for ZigBee support. I opted for ZHA because it has
fairly good device support and is easy to use and set up.</p>
<p>The alternatives are:</p>
<ul>
<li><a href="https://www.zigbee2mqtt.io/">Zigbee2MQTT</a>
<ul>
<li>Good for power users; excellent configurability and the best device support.</li>
<li>It can be complicated.</li>
</ul></li>
<li><a href="https://dresden-elektronik.github.io/deconz-rest-doc/">deCONZ Add-On</a>
<ul>
<li>Made by the ConBee2 developers. There are no real benefits to using
this integration.</li>
</ul></li>
</ul>
<h3 id="Z-Wave+automation" name="Z-Wave+automation">Z-Wave automation</h3>
<p>For Z-Wave I am using the
<a href="https://www.home-assistant.io/integrations/zwave_js">Z-Wave JS</a>
integration paired with a
<a href="https://aeotec.com/products/aeotec-z-stick-gen5/">Aeotec Z-Stick Gen5</a>.</p>
<p>I am using the
<a href="https://github.com/hassio-addons/addon-zwave-js-ui">Z-Wave JS UI</a>
from Community Add-ons because that gives you a control panel
user interface that is handy when debugging obscure Z-Wave issues and
also has support for creating direct node group associations.</p>
<p>In most day-to-day situations, I don't really use this control panel, as
most operations can be done from the <a href="https://www.home-assistant.io/">Home Assistant</a> integration
directly.</p>
<p>When adding this, it is <em>not</em> possible to do it from the auto-discovered
<em>z-stick gen5</em> entry, as that will automatically install the official Z-Wave JS add-on.
So just ignore it, use the <code>Add Integration</code> functionality, and pick the
<code>Z-Wave JS</code> integration from the menu. This lets you skip the
standard add-on installation and specify the Z-Wave JS UI add-on
instead.</p>
<h3 id="Other+integrations" name="Other+integrations">Other integrations</h3>
<ul>
<li><a href="https://www.home-assistant.io/integrations/buienradar">Buienradar</a> : Dutch
weather data. Just add it and it works mostly out of the box.</li>
<li><a href="https://www.home-assistant.io/integrations/dlna_dms">DLNA media servers</a> :
these are discovered automatically as long as they are in the same Subnet.</li>
<li><a href="https://www.home-assistant.io/integrations/cast">Google Cast</a> :
These are also discovered automatically. Used for Android TV devices and
Google Nest speakers/displays.</li>
<li><a href="https://www.home-assistant.io/integrations/ipp">Printer</a> : This was
automatically discovered.</li>
<li><a href="https://www.home-assistant.io/integrations/waze_travel_time">Waze Travel Time</a> :
Show the commute time between two points.</li>
<li><a href="https://www.home-assistant.io/integrations/rdw">Netherlands Vehicle Authority</a> :
Yes, this can be done, but I am not sure how useful it is. Maybe to remind you that the APK
inspection is due?</li>
<li><a href="https://www.home-assistant.io/integrations/philips_js/">Philips TV</a> :
Installing this integration is quite straightforward. It enables automation
options, but I don't know what to do with them yet. It also has the limitation that
the TV can't be turned on through the API.</li>
<li><a href="https://www.home-assistant.io/integrations/jellyfin/">Jellyfin</a> :
Adds a Jellyfin server as a media source. Note that only a single Jellyfin
server can be configured.</li>
</ul>
<h2 id="References" name="References">References</h2>
<ul>
<li><a href="https://github.com/home-assistant/operating-system">https://github.com/home-assistant/operating-system</a></li>
<li><a href="https://developers.home-assistant.io/docs/operating-system/getting-started">https://developers.home-assistant.io/docs/operating-system/getting-started</a></li>
</ul>
Markdown cheat sheet
urn:uuid:68871fe4-8755-c471-09dd-0a255f60a766
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is intended as a quick reference and showcase. For more complete info,
see <a href="http://daringfireball.net/projects/markdown/">John Gruber's original spec</a>
and the <a href="http://github.github.com/github-flavored-markdown/">Github-flavored Markdown info page</a>.</p>
<h2 id="Headers" name="Headers">Headers</h2>
<p>Source:</p>
<hr />
<pre><code class="language-markdown"># H1
## H2
### H3
#### H4
##### H5
###### H6
Alternatively, for H1 and H2, an underline-ish style:
Alt-H1
======
Alt-H2
------</code></pre>
<hr />
<p>Output:</p>
<hr />
<h2 id="H1" name="H1">H1</h2>
<h3 id="H2" name="H2">H2</h3>
<h4 id="H3" name="H3">H3</h4>
<h5 id="H4" name="H4">H4</h5>
<h6 id="H5" name="H5">H5</h6>
<h6 id="H6" name="H6">H6</h6>
<p>Alternatively, for H1 and H2, an underline-ish style:</p>
<h1>Alt-H1</h1>
<h2>Alt-H2</h2>
<hr />
<h2 id="Emphasis" name="Emphasis">Emphasis</h2>
<p>Source:</p>
<hr />
<pre><code class="language-markdown">Emphasis, aka italics, with *asterisks* or _underscores_.
Strong emphasis, aka bold, with **asterisks** or __underscores__.
Combined emphasis with **asterisks and _underscores_**.
Strikethrough uses two tildes. ~~Scratch this.~~</code></pre>
<hr />
<p>Output:</p>
<hr />
<p>Emphasis, aka italics, with <em>asterisks</em> or <em>underscores</em>.</p>
<p>Strong emphasis, aka bold, with <strong>asterisks</strong> or <strong>underscores</strong>.</p>
<p>Combined emphasis with <strong>asterisks and <em>underscores</em></strong>.</p>
<p>Strikethrough uses two tildes. <del>Scratch this.</del></p>
<hr />
<h2 id="Lists" name="Lists">Lists</h2>
<p>(In this example, leading and trailing spaces are shown with dots: ⋅)</p>
<p>Source:</p>
<hr />
<pre><code class="language-markdown">1. First ordered list item
2. Another item
⋅⋅* Unordered sub-list.
1. Actual numbers don't matter, just that it's a number
⋅⋅1. Ordered sub-list
4. And another item.
⋅⋅⋅You can have properly indented paragraphs within list items. Notice the blank line above, and the leading spaces (at least one, but we'll use three here to also align the raw Markdown).
⋅⋅⋅To have a line break without a paragraph, you will need to use two trailing spaces.⋅⋅
⋅⋅⋅Note that this line is separate, but within the same paragraph.⋅⋅
⋅⋅⋅(This is contrary to the typical GFM line break behaviour, where trailing spaces are not required.)
* Unordered list can use asterisks
- Or minuses
+ Or pluses</code></pre>
<hr />
<p>Output:</p>
<hr />
<ol>
<li>First ordered list item</li>
<li>Another item</li>
</ol>
<ul>
<li>Unordered sub-list.</li>
</ul>
<ol>
<li>Actual numbers don't matter, just that it's a number</li>
<li>Ordered sub-list</li>
<li>
<p>And another item.</p>
<p>You can have properly indented paragraphs within list items. Notice the blank line above, and the leading spaces (at least one, but we'll use three here to also align the raw Markdown).</p>
<p>To have a line break without a paragraph, you will need to use two trailing spaces.
Note that this line is separate, but within the same paragraph.
(This is contrary to the typical GFM line break behaviour, where trailing spaces are not required.)</p>
</li>
</ol>
<ul>
<li>Unordered list can use asterisks</li>
</ul>
<ul>
<li>Or minuses</li>
</ul>
<ul>
<li>Or pluses</li>
</ul>
<hr />
<h2 id="Links" name="Links">Links</h2>
<p>There are two ways to create links.</p>
<p>Source:</p>
<hr />
<pre><code class="language-markdown">[I'm an inline-style link](https://www.google.com)
[I'm an inline-style link with title](https://www.google.com "Google's Homepage")
[I'm a reference-style link][Arbitrary case-insensitive reference text]
[I'm a relative reference to a repository file](../blob/master/LICENSE)
[You can use numbers for reference-style link definitions][1]
Or leave it empty and use the [link text itself].
URLs and URLs in angle brackets will automatically get turned into links.
http://www.example.com or <http://www.example.com> and sometimes
example.com (but not on Github, for example).
Some text to show that the reference links can follow later.
[arbitrary case-insensitive reference text]: https://www.mozilla.org
[1]: http://slashdot.org
[link text itself]: http://www.reddit.com</code></pre>
<hr />
<p>Output:</p>
<hr />
<p><a href="https://www.google.com">I'm an inline-style link</a></p>
<p><a href="https://www.google.com" title="Google's Homepage">I'm an inline-style link with title</a></p>
<p><a href="https://www.mozilla.org">I'm a reference-style link</a></p>
<p><a href="../blob/master/LICENSE">I'm a relative reference to a repository file</a></p>
<p><a href="http://slashdot.org">You can use numbers for reference-style link definitions</a></p>
<p>Or leave it empty and use the <a href="http://www.reddit.com">link text itself</a>.</p>
<p>URLs and URLs in angle brackets will automatically get turned into links.
<a href="http://www.example.com">http://www.example.com</a> or <a href="http://www.example.com">http://www.example.com</a> and sometimes
example.com (but not on Github, for example).</p>
<p>Some text to show that the reference links can follow later.</p>
<hr />
<h2 id="Images" name="Images">Images</h2>
<p>Source:</p>
<hr />
<pre><code class="language-markdown">Here's our logo (hover to see the title text):
Inline-style:
![alt text](https://github.com/adam-p/markdown-here/raw/master/src/common/images/icon48.png "Logo Title Text 1")
Reference-style:
![alt text][logo]
[logo]: https://github.com/adam-p/markdown-here/raw/master/src/common/images/icon48.png "Logo Title Text 2"</code></pre>
<hr />
<p>Output:</p>
<hr />
<p>Here's our logo (hover to see the title text):</p>
<p>Inline-style:
<img src="https://github.com/adam-p/markdown-here/raw/master/src/common/images/icon48.png" alt="alt text" title="Logo Title Text 1" /></p>
<p>Reference-style:
<img src="https://github.com/adam-p/markdown-here/raw/master/src/common/images/icon48.png" alt="alt text" title="Logo Title Text 2" /></p>
<hr />
<h2 id="Code+and+Syntax+Highlighting" name="Code+and+Syntax+Highlighting">Code and Syntax Highlighting</h2>
<p>Code blocks are part of the Markdown spec, but syntax highlighting isn't.
However, many renderers support syntax highlighting. Which languages are supported
and how those language names should be written will vary from renderer to renderer.</p>
<p>To see the complete list, and how to write the language names, see the
<a href="http://softwaremaniacs.org/media/soft/highlight/test.html">highlight.js demo page</a>.</p>
<p>Source:</p>
<hr />
<pre><code class="language-markdown">Inline `code` has `back-ticks around` it.</code></pre>
<hr />
<p>Output:</p>
<hr />
<p>Inline <code>code</code> has <code>back-ticks around</code> it.</p>
<hr />
<p>Blocks of code are either fenced by lines with three back-ticks
"```",
or are indented with four spaces. I recommend only using the fenced code
blocks -- they're easier and only they support syntax highlighting.</p>
<p>Source:</p>
<hr />
<pre lang="no-highlight"><code>```javascript
var s = "JavaScript syntax highlighting";
alert(s);
```
```python
s = "Python syntax highlighting"
print s
```
```
No language indicated, so no syntax highlighting.
But let's throw in a <b>tag</b>.
```
</code></pre>
<hr />
<p>Output:</p>
<pre><code class="language-javascript">var s = "JavaScript syntax highlighting";
alert(s);</code></pre>
<pre><code class="language-python">s = "Python syntax highlighting"
print s</code></pre>
<pre><code>No language indicated, so no syntax highlighting in Markdown Here (varies on Github).
But let's throw in a <b>tag</b>.</code></pre>
<hr />
<h2 id="Tables" name="Tables">Tables</h2>
<p>Tables aren't part of the core Markdown spec, but they are part of GFM. They
are an easy way of adding tables to your email -- a task that would otherwise
require copy-pasting from another application.</p>
<p>Source:</p>
<hr />
<pre><code class="language-markdown">Colons can be used to align columns.
| Tables | Are | Cool |
| ------------- |:-------------:| -----:|
| col 3 is | right-aligned | $1600 |
| col 2 is | centered | $12 |
| zebra stripes | are neat | $1 |
There must be at least 3 dashes separating each header cell.
The outer pipes (|) are optional, and you don't need to make the
raw Markdown line up prettily. You can also use inline Markdown.
Markdown | Less | Pretty
--- | --- | ---
*Still* | `renders` | **nicely**
1 | 2 | 3</code></pre>
<hr />
<p>Output:</p>
<hr />
<p>Colons can be used to align columns.</p>
<table>
<thead>
<tr>
<th>Tables</th>
<th style="text-align: center;">Are</th>
<th style="text-align: right;">Cool</th>
</tr>
</thead>
<tbody>
<tr>
<td>col 3 is</td>
<td style="text-align: center;">right-aligned</td>
<td style="text-align: right;">$1600</td>
</tr>
<tr>
<td>col 2 is</td>
<td style="text-align: center;">centered</td>
<td style="text-align: right;">$12</td>
</tr>
<tr>
<td>zebra stripes</td>
<td style="text-align: center;">are neat</td>
<td style="text-align: right;">$1</td>
</tr>
</tbody>
</table>
<p>There must be at least 3 dashes separating each header cell. The outer pipes (|)
are optional, and you don't need to make the raw Markdown line up prettily. You
can also use inline Markdown.</p>
<table>
<thead>
<tr>
<th>Markdown</th>
<th>Less</th>
<th>Pretty</th>
</tr>
</thead>
<tbody>
<tr>
<td><em>Still</em></td>
<td><code>renders</code></td>
<td><strong>nicely</strong></td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
</tbody>
</table>
<hr />
<h2 id="Blockquotes" name="Blockquotes">Blockquotes</h2>
<p>Source:</p>
<hr />
<pre><code class="language-markdown">> Blockquotes are very handy in email to emulate reply text.
> This line is part of the same quote.
Quote break.
> This is a very long line that will still be quoted properly when it wraps. Oh boy let's keep writing to make sure this is long enough to actually wrap for everyone. Oh, you can *put* **Markdown** into a blockquote.</code></pre>
<hr />
<p>Output:</p>
<hr />
<blockquote>
<p>Blockquotes are very handy in email to emulate reply text.
This line is part of the same quote.</p>
</blockquote>
<p>Quote break.</p>
<blockquote>
<p>This is a very long line that will still be quoted properly when it wraps. Oh boy let's keep writing to make sure this is long enough to actually wrap for everyone. Oh, you can <em>put</em> <strong>Markdown</strong> into a blockquote.</p>
</blockquote>
<hr />
<h2 id="Inline+HTML" name="Inline+HTML">Inline HTML</h2>
<p>You can also use raw HTML in your Markdown, and it'll mostly work pretty well.</p>
<p>Source:</p>
<hr />
<pre><code class="language-markdown"><dl>
<dt>Definition list</dt>
<dd>Is something people use sometimes.</dd>
<dt>Markdown in HTML</dt>
<dd>Does *not* work **very** well. Use HTML <em>tags</em>.</dd>
</dl></code></pre>
<hr />
<p>Output:</p>
<hr />
<dl>
<dt>Definition list</dt>
<dd>Is something people use sometimes.</dd>
<dt>Markdown in HTML</dt>
<dd>Does *not* work **very** well. Use HTML <em>tags</em>.</dd>
</dl>
<hr />
<h2 id="Horizontal+Rule" name="Horizontal+Rule">Horizontal Rule</h2>
<p>Source:</p>
<hr />
<pre><code>Three or more...
---
Hyphens
***
Asterisks
___
Underscores</code></pre>
<hr />
<p>Output:</p>
<hr />
<p>Three or more...</p>
<hr />
<p>Hyphens</p>
<hr />
<p>Asterisks</p>
<hr />
<p>Underscores</p>
<hr />
<h2 id="Line+Breaks" name="Line+Breaks">Line Breaks</h2>
<p>My basic recommendation for learning how line breaks work is to experiment and
discover -- hit <kbd>Enter</kbd> once (i.e., insert one newline), then hit it twice
(i.e., insert two newlines), see what happens. You'll soon learn to get what you
want. "Markdown Toggle" is your friend.</p>
<p>Here are some things to try out:</p>
<p>Source:</p>
<hr />
<pre><code class="language-markdown">Here's a line for us to start with.
This line is separated from the one above by two newlines, so it will be a *separate paragraph*.
This line is also a separate paragraph, but...
This line is only separated by a single newline, so it's a separate line in the *same paragraph*.</code></pre>
<hr />
<p>Output:</p>
<hr />
<p>Here's a line for us to start with.</p>
<p>This line is separated from the one above by two newlines, so it will be a <em>separate paragraph</em>.</p>
<p>This line also begins a separate paragraph, but...
This line is only separated by a single newline, so it's a separate line in the <em>same paragraph</em>.</p>
<p>(Technical note: <em>Markdown Here</em> uses GFM line breaks, so there's no need to use MD's two-space line breaks.)</p>
<hr />
<h2 id="Local+extensions" name="Local+extensions">Local extensions</h2>
<p>Source:</p>
<hr />
<pre><code class="language-markdown">- [x] ticked checkboxes
- [ ] unticked check box
- Use ++insert++ text
- This is ^^superscript^^ stuff.
- This is ,,subscript,, stuff.
- This is ~~strikethrough~~ text.
- This is ??marked?? text.
</code></pre>
<hr />
<p>Output:</p>
<hr />
<ul>
<li><input type="checkbox" disabled checked> ticked checkboxes</li>
<li><input type="checkbox" disabled > unticked check box</li>
<li>Use <ins>insert</ins> text</li>
<li>This is <sup>superscript</sup> stuff.</li>
<li>This is <sub>subscript</sub> stuff.</li>
<li>This is <del>strikethrough</del> text.</li>
<li>This is <mark>marked</mark> text.</li>
</ul>
<hr />
<h2 id="Diagrams" name="Diagrams">Diagrams</h2>
<p>Source:</p>
<hr />
<pre lang="no-highlight"><code>dot {
graph NET {
layout=neato
edge [weight=2.0 fontsize=7]
node [style=filled shape=box]
node [fillcolor=white] kpnmodem
node [fillcolor=lightblue] ngs1 ngs2 ngs3
node [fillcolor=lightgreen] cctv_sw
node [fillcolor=silver] cn4 iptv1 veraedge1 nd2 nd3 philtv
node [fillcolor=yellow] wac1 wac2 wac3 owap1
kpnmodem -- cn4 [label="v2" taillabel="p1" headlabel="p2"]
kpnmodem -- iptv1 [label="v2 (p#7)" taillabel="p2"]
kpnmodem -- veraedge1 [label="v2" taillabel="p4"]
ngs1 -- cn4 [label="v1,3" taillabel="p10" headlabel="p0"]
ngs1 -- ngs2 [label="v1,3 (p#4)" taillabel="p1" headlabel="p1"]
ngs1 -- ngs3 [label="v1,3" taillabel="p8,9" headlabel="p1,8" penwidth=2.0]
ngs1 -- cctv_sw [label="v3" taillabel="p7" headlabel="p5"]
ngs3 -- nd2 [label="v1,3" taillabel="p7"]
ngs3 -- nd3 [label="v1,3" taillabel="p8"]
cctv_sw -- ipcam1 [label="v3" taillabel="p1"]
cctv_sw -- ipcam2 [label="v3 (p#1)" taillabel="p2"]
cctv_sw -- ipcam3 [label="v3 (p#5)" taillabel="p3"]
}
}
```aafigure {"foreground": "#ff0000"}
+-----+ ^
| | |
--->+ +---o--->
| | |
+-----+ V
```
</code></pre>
<hr />
<p>Output:</p>
<hr />
<p>dot {
graph NET {
layout=neato</p>
<pre><code>edge [weight=2.0 fontsize=7]
node [style=filled shape=box]
node [fillcolor=white] kpnmodem
node [fillcolor=lightblue] ngs1 ngs2 ngs3
node [fillcolor=lightgreen] cctv_sw
node [fillcolor=silver] cn4 iptv1 veraedge1 nd2 nd3 philtv
node [fillcolor=yellow] wac1 wac2 wac3 owap1
kpnmodem -- cn4 [label="v2" taillabel="p1" headlabel="p2"]
kpnmodem -- iptv1 [label="v2 (p#7)" taillabel="p2"]
kpnmodem -- veraedge1 [label="v2" taillabel="p4"]
ngs1 -- cn4 [label="v1,3" taillabel="p10" headlabel="p0"]
ngs1 -- ngs2 [label="v1,3 (p#4)" taillabel="p1" headlabel="p1"]
ngs1 -- ngs3 [label="v1,3" taillabel="p8,9" headlabel="p1,8" penwidth=2.0]
ngs1 -- cctv_sw [label="v3" taillabel="p7" headlabel="p5"]
ngs3 -- nd2 [label="v1,3" taillabel="p7"]
ngs3 -- nd3 [label="v1,3" taillabel="p8"]
cctv_sw -- ipcam1 [label="v3" taillabel="p1"]
cctv_sw -- ipcam2 [label="v3 (p#1)" taillabel="p2"]
cctv_sw -- ipcam3 [label="v3 (p#5)" taillabel="p3"]</code></pre>
<p>}
}</p>
<div><svg class="bob" font-family="arial" font-size="14" height="80" width="168" xmlns="http://www.w3.org/2000/svg">
<defs>
<marker id="triangle" markerHeight="10" markerUnits="strokeWidth" markerWidth="10" orient="auto" refX="15" refY="10" viewBox="0 0 50 20">
<path d="M 0 0 L 30 10 L 0 20 z"/>
</marker>
</defs>
<style>
line, path {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle.solid {
fill:black;
}
circle.open {
fill:transparent;
}
tspan.head{
fill: none;
stroke: none;
}
</style>
<path d=" M 52 8 L 56 8 M 52 8 L 52 16 M 56 8 L 64 8 M 56 8 L 64 8 L 72 8 M 64 8 L 72 8 L 80 8 M 72 8 L 80 8 L 88 8 M 80 8 L 88 8 L 96 8 M 88 8 L 96 8 M 100 8 L 96 8 M 100 8 L 100 16 M 52 16 L 52 32 M 52 16 L 52 32 M 100 16 L 100 32 M 100 16 L 100 32 M 132 16 L 132 32 M 132 16 L 132 32 M 16 40 L 24 40 M 16 40 L 24 40 L 32 40 M 24 40 L 32 40 M 52 40 L 52 32 M 52 40 L 48 40 M 52 40 L 52 48 M 100 40 L 100 32 M 100 40 L 104 40 M 100 40 L 100 48 M 104 40 L 112 40 M 104 40 L 112 40 L 120 40 M 112 40 L 120 40 L 128 40 M 120 40 L 128 40 M 132 36 L 132 32 M 132 44 L 132 48 M 136 40 L 144 40 M 136 40 L 144 40 L 152 40 M 144 40 L 152 40 M 52 48 L 52 64 M 52 48 L 52 64 M 100 48 L 100 64 M 100 48 L 100 64 M 52 72 L 52 64 M 52 72 L 56 72 L 64 72 M 56 72 L 64 72 L 72 72 M 64 72 L 72 72 L 80 72 M 72 72 L 80 72 L 88 72 M 80 72 L 88 72 L 96 72 M 88 72 L 96 72 M 100 72 L 100 64 M 100 72 L 96 72" fill="none"/>
<path d="" fill="none" stroke-dasharray="3 3"/>
<line marker-end="url(#triangle)" x1="132" x2="132" y1="16" y2="4"/>
<line marker-end="url(#triangle)" x1="32" x2="44" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="32" x2="44" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="40" x2="44" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="152" x2="164" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="152" x2="164" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="160" x2="164" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="132" x2="132" y1="48" y2="76"/>
<line marker-end="url(#triangle)" x1="132" x2="132" y1="48" y2="76"/>
<line marker-end="url(#triangle)" x1="132" x2="132" y1="64" y2="76"/>
<circle class="open" cx="132" cy="40" r="4"/>
<circle class="open" cx="132" cy="40" r="4"/>
<circle class="open" cx="132" cy="40" r="4"/>
<circle class="open" cx="132" cy="40" r="4"/>
</svg>
</div>
<hr />
<h2 id="Others" name="Others">Others</h2>
<ul>
<li><code>#++</code> and <code>#--</code> for headown</li>
<li><code>$include: file.md $</code></li>
</ul>
Nanowiki
urn:uuid:25c4d0c7-223f-d749-d1a9-ac58f0833184
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="https://github.com/iliu-net/nanowiki">NanoWiki</a> is a Wiki implementation based on <a href="https://github.com/luckyshot/picowiki">picowiki</a>.</p>
<p>I have been using <a href="https://simplenote.com/">SimpleNote</a> for a number of years. It works pretty well,
but I was still looking for:</p>
<ul>
<li>Ability to include and render nice asciiart pictures</li>
<li>Ability to organize articles in a folder structure.</li>
</ul>
<p>So I was looking for a Wiki package that could either do this or be extended
to do this.</p>
<p><img src="https://github.com/iliu-net/nanowiki/raw/main/static/screenshot.png" alt="screenshot" /></p>
<p>Other features that I was looking for:</p>
<ul>
<li>Use of <a href="https://daringfireball.net/projects/markdown/">markdown</a> for markup, and be able to tweak the format as needed.</li>
<li>Editor that would <em>syntax highlight</em> the markdown syntax</li>
<li>Store data as simple files</li>
<li>Written in a language I am familiar with.</li>
<li>software generated network graphs (graphviz)</li>
</ul>
<p>So, after looking at a number of packages, I opted for one that, although
it did not have all the features, was small enough and easily extendable.</p>
<p><a href="https://github.com/luckyshot/picowiki">PicoWiki</a> is a very small Wiki implementation with a <em>plugin</em>
architecture, so it is quite easy to extend. The downside of this
is that the functionality in <a href="https://github.com/luckyshot/picowiki">PicoWiki</a> is quite limited. So
I added the following features:</p>
<ul>
<li>file management: create, delete, rename, modify, attach, etc.</li>
<li>hooks for access control</li>
<li>meta data support</li>
<li>Disabled code execution. This can be considered a <em>"security"</em> feature.</li>
<li>Support for byte ranges. This lets you stream video files directly
from the wiki.</li>
<li>toggleable folder or document views.</li>
<li>theme support</li>
<li>Multiple file type handling</li>
</ul>
<p>The default installation has the following plugins:</p>
<ul>
<li>Emoji : Render emojis</li>
<li>HTML : HTML content handler</li>
<li>MarkDown : Markdown content handler</li>
<li>Includes : Include Wiki documents in another</li>
<li>Vars : Expand variables. Either from document metadata or from the NanoWiki config file.</li>
<li>WikiLinks : short hand for wiki links.</li>
</ul>
X728 kit for Raspberry Pi 4
urn:uuid:90557427-7030-f41c-50ed-ec0b015336c5
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>As part of my small project of moving my Z-Wave Hub to a Raspberry PI, I got an
<a href="http://wiki.geekworm.com/X728">X728</a> kit. This has:</p>
<ul>
<li>UPS controller board
<ul>
<li>RTC circuit</li>
<li>Battery and Power control board</li>
</ul></li>
<li>Case
<ul>
<li>Button</li>
<li>Cooling fan</li>
<li>Additional Battery holder</li>
</ul></li>
</ul>
<p>The case has holes for wall-mounting.</p>
<p>The Geekworm X728 kit is very easy to build. There is a video to show how to do
this:</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=QOG30LXb6ds&t">Build video</a></li>
</ul>
<p>Otherwise, refer to the hardware guide <a href="http://wiki.geekworm.com/X728-hardware">here</a>.</p>
<p>In my case, before the build, I took the disassembled case to measure the holes
needed for wall-mounting the case. You need fairly small screws for this. I
actually had to bend the case slightly for my screws to work.</p>
<p>Also, I set the jumper for automatic power-on and used a few cable ties to attach a USB hub
to the case.</p>
<p>To test the hardware I downloaded a 64-bit Raspberry OS Lite image from
<a href="https://www.raspberrypi.com/software/operating-systems/">Raspberrypi.com</a> and
imaged a micro-SD card with it.</p>
<ul>
<li>Boot the Raspberry OS. The first boot will resize the filesystems, so please
wait. Also, it will let you configure the default user and password.</li>
<li>Enable the i2c function:
<ul>
<li><code>sudo raspi-config</code></li>
<li>Go to <code>Interfacing Options</code> -> <code>I2C - Enable/Disable automatic loading</code>.</li>
<li>While you are at it, you may also enable <code>SSH</code>.</li>
</ul></li>
<li>Alternatively, you can do a manual install:
<ul>
<li>Modify the <code>config.txt</code> in the <code>/boot</code> partition:</li>
<li>Add at the end:</li>
<li><code>[all]</code></li>
<li><code>dtparam=i2c_arm=on</code></li>
</ul></li>
<li>Install pre-requisites:
<ul>
<li><code>sudo apt-get update</code></li>
<li><code>sudo apt-get upgrade</code></li>
<li><code>sudo apt-get -y install i2c-tools</code></li>
<li>This is only needed for <code>i2cdetect</code>.</li>
</ul></li>
<li>Reboot the system.</li>
<li>Check if the hardware is detected:
<ul>
<li><code>sudo i2cdetect -y 1</code></li>
<li><img src="https://raw.githubusercontent.com/alejandroliu/0ink.net/main/snippets/2022/X728/imgs/X728x-i2c.png" alt="screenshot" /></li>
<li><code>#36</code> - the address of the battery fuel gauging chip</li>
<li><code>#68</code> - the address of the RTC chip</li>
<li>Different x728 versions may have different values. Mine used these values.</li>
</ul></li>
</ul>
<p>I personally did not like the example software. This can be found on <a href="https://github.com/geekworm-com/x728">github</a>.
Specifically, the <code>shutdown</code> functionality seemed to have <a href="https://en.wikipedia.org/wiki/Race_condition">race-conditions</a>.</p>
<p>However, using the hardware is not that complicated, so I wrote my own software. You
can do your own thing:</p>
<h2 id="RTC+functionality" name="RTC+functionality">RTC functionality</h2>
<p>The RTC functionality is supported by the Raspberry OS kernel. You need to enable
the <code>i2c</code> functionality in <code>/boot/config.txt</code> by adding the line:</p>
<pre><code class="language-config">dtparam=i2c_arm=on</code></pre>
<p>With that enabled, you need to add the kernel modules:</p>
<pre><code>i2c-dev
rtc-ds1307</code></pre>
<p>Then you need to register the RTC device on the bus:</p>
<pre><code class="language-bash">echo ds1307 0x68 > /sys/class/i2c-adapter/i2c-1/new_device</code></pre>
<p>From then on, you can use the standard <code>hwclock</code> command.</p>
<p>These can be added to <code>rc.local</code>, which is what the sample code does. I prefer to
do this from a script run from a <code>systemd</code> unit file:</p>
<pre><code class="language-conf"># file: /etc/systemd/system/x728clock.service
[Unit]
Description=Restore / save X728 clock
DefaultDependencies=no
Before=sysinit.target shutdown.target
Conflicts=shutdown.target
[Service]
ExecStart=/etc/x728/clock.sh start
ExecStop=/etc/x728/clock.sh stop
Type=oneshot
RemainAfterExit=yes
[Install]
WantedBy=sysinit.target
</code></pre>
<p>After saving this file and creating a <code>/etc/x728/clock.sh</code> script you can:</p>
<pre><code>systemctl daemon-reload
systemctl enable x728clock
systemctl start x728clock
systemctl disable fake-hwclock
systemctl stop fake-hwclock</code></pre>
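<p>As a rough idea of what the <code>clock.sh</code> script needs to do, here is an
illustrative sketch. This is my guess at the essentials, not the linked script
verbatim:</p>
<pre><code class="language-bash">#!/bin/sh
# Sketch of /etc/x728/clock.sh -- illustrative only; see the linked
# script for the real thing.
rtc_start() {
  # register the DS1307-compatible RTC on bus 1 (ignore if already registered)
  echo ds1307 0x68 > /sys/class/i2c-adapter/i2c-1/new_device 2>/dev/null
  hwclock --hctosys   # restore system time from the RTC
}
rtc_stop() {
  hwclock --systohc   # save system time back to the RTC
}
case "$1" in
start) rtc_start ;;
stop)  rtc_stop ;;
esac
</code></pre>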
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/x728clock.service">systemd unit file</a></li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/scripts/clock.sh">script</a></li>
</ul>
<h2 id="GPIO+assignments" name="GPIO+assignments">GPIO assignments</h2>
<table>
<thead>
<tr>
<th>Pin</th>
<th>Function</th>
<th>Direction</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>#6</td>
<td>PLD</td>
<td>in</td>
<td>1: A/C lost, 0: A/C OK</td>
</tr>
<tr>
<td>#5</td>
<td>Shutdown</td>
<td>in</td>
<td>Sense button press</td>
</tr>
<tr>
<td>#12</td>
<td>Boot</td>
<td>out</td>
<td>Control SW/HW controlled button</td>
</tr>
<tr>
<td>#20</td>
<td>Buzzer</td>
<td>out</td>
<td></td>
</tr>
<tr>
<td>#26</td>
<td>Button</td>
<td>out</td>
<td>Simulate power button press</td>
</tr>
</tbody>
</table>
<ul>
<li>Reading <code>PLD</code> detects whether A/C power is available: it reads <code>1</code>
if A/C power was lost, <code>0</code> if A/C power is available.</li>
<li><code>Buzzer</code> if set to <code>1</code> it will sound a rather loud beep. <code>0</code> for off.</li>
<li><code>Button</code> simulates pressing the hardware <code>Off</code> button. If you set <code>Button</code>
to <code>1</code> for 6 seconds, the system will poweroff.</li>
<li><code>Shutdown</code> is used to read the status of the hardware button. This works only
if <code>Boot</code> is set to <code>1</code>; otherwise, <code>Shutdown</code> doesn't seem to work.
When <code>Boot</code> is set to <code>1</code>, it reads <code>1</code> if pressed, <code>0</code> if released.
Weirdly enough, the <code>Shutdown</code> button is not very sensitive. It takes about
3 seconds to register a button press, and a button release takes about 50
seconds to detect.</li>
</ul>
<h2 id="GPIO+programming" name="GPIO+programming">GPIO programming</h2>
<p>Programming the GPIO pins is quite easy. It can be done from shell scripts using the
<code>sysfs</code> file system.</p>
<pre><code class="language-bash">gpioIO() {
local pin=$1
if [ $# -eq 1 ] ; then
cat /sys/class/gpio/gpio$pin/value
else
echo "$2" > /sys/class/gpio/gpio$pin/value
fi
}
gpioInit() {
local name="$1" pin="$2" dir="$3"
[ ! -d /sys/class/gpio/gpio$pin ] && echo "$pin" > /sys/class/gpio/export
echo $dir > /sys/class/gpio/gpio$pin/direction
eval "gpio${name}() { gpioIO $pin \"\$@\" ; }"
}
ticks() {
echo $(date +%s)$(date +%N | cut -c-2)
}
beep() {
local len="$1" ; shift
gpioBUZZER 1
sleep "$len"
gpioBUZZER 0
[ $# -eq 0 ] && return
local repeat="$1" idle
[ $# -gt 1 ] && idle="$2" || idle="$len"
while [ $repeat -gt 1 ]
do
repeat=$(expr $repeat - 1)
sleep "$idle"
gpioBUZZER 1
sleep "$len"
gpioBUZZER 0
done
}
gpioInit SHUTDOWN 5 in
gpioInit PLD 6 in
gpioInit BOOT 12 out
gpioInit BUZZER 20 out
gpioInit BUTTON 26 out
</code></pre>
<p>Afterwards, you can just:</p>
<ul>
<li><code>gpio[PIN]</code> to read, i.e:
<ul>
<li><code>gpioSHUTDOWN</code></li>
<li><code>gpioPLD</code></li>
</ul></li>
<li><code>gpio[PIN] {1|0}</code> to write i.e.:
<ul>
<li><code>gpioBOOT 1</code></li>
<li><code>gpioBUZZER 0</code></li>
</ul></li>
</ul>
<h2 id="Reading+Battery+status" name="Reading+Battery+status">Reading Battery status</h2>
<p>You can read the battery voltage and battery charge from the <code>smbus</code>. To read the <code>smbus</code>
I am using an example from <a href="https://github.com/leon-anavi/rpi-examples/tree/master/BMP180/c">rpi-examples</a>.
Specifically, I am only using the files <code>smbus.c</code> and <code>smbus.h</code> from that repository.</p>
<p>The code outline is:</p>
<ul>
<li>open <code>/dev/i2c-1</code> in read/write mode.</li>
<li><code>ioctl(fd, I2C_SLAVE, I2C_ADDRESS)</code> where <code>I2C_ADDRESS = 0x36</code>.</li>
<li>From <code>smbus.c</code> read <code>i2c_smbus_read_word_data(fd, address)</code>, and byte swap.</li>
<li>Voltage can be read from address <code>2</code>:
<ul>
<li><code>Voltage = (swapped) * 1.25 / 1000 / 16</code></li>
</ul></li>
<li>Battery charge can be read from address <code>4</code>:
<ul>
<li><code>Battery = (swapped) / 256</code></li>
</ul></li>
</ul>
<p>The code to do this can be found on <a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2022/X728/src">github</a>.</p>
<p>A precompiled 64-bit static binary can be found there too.</p>
<h2 id="Power+down" name="Power+down">Power down</h2>
<p>The <a href="http://wiki.geekworm.com/X728">x728</a> will turn off power by holding down the power button for around
6 seconds. Doing this will skip the <code>shutdown</code> process. Also, if you execute the
<code>poweroff</code> command, the Raspberry Pi will shutdown but power will not go <strong>OFF</strong> until
you hold the power button for 6 seconds.</p>
<p>For this to work properly, I am adding this small script to <code>/lib/systemd/system-shutdown/gpio-poweroff</code>:</p>
<pre><code class="language-bash">#!/bin/sh
#
# file: /lib/systemd/system-shutdown/gpio-poweroff
# $1 will be either "halt", "poweroff", "reboot" or "kexec"
#
BUTTON=26
op_poweroff() {
echo $BUTTON > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio$BUTTON/direction
echo 1 > /sys/class/gpio/gpio$BUTTON/value
sync;sync;sync
sleep 7
echo 0 > /sys/class/gpio/gpio$BUTTON/value
sleep 3
}
case "$1" in
poweroff) op_poweroff ;;
esac
</code></pre>
<p>This hooks into <code>systemd</code>'s <code>shutdown</code> target and uses the <code>BUTTON</code> pin to
simulate holding the button for 6 seconds to force the UPS board to power off.</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/gpio-poweroff">script</a></li>
</ul>
<h2 id="UPS+management" name="UPS+management">UPS management</h2>
<p>In addition, I wrote a small script to:</p>
<ul>
<li>Do a graceful shutdown when the power button is pressed.
<ul>
<li>Hold the power button; after approximately 3 seconds, you will hear
2 beeps. You can release the power button then. The system will
do a graceful shutdown and power down.</li>
</ul></li>
<li>When A/C power is lost:
<ul>
<li>If battery status can not be determined, the system will do a graceful
powerdown.</li>
<li>If battery status can be read, it will beep once every 60 seconds until
power is restored.</li>
<li>If battery is low, it will do a graceful powerdown.</li>
</ul></li>
<li>Events are written to <code>/dev/kmsg</code>, so they could be forwarded to a syslog server.</li>
</ul>
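<p>The decision logic above boils down to something like the following sketch.
The function name and the low-battery threshold here are arbitrary examples,
not necessarily what my script uses:</p>
<pre><code class="language-bash"># pld:  value read from the PLD pin (1 = A/C lost, 0 = A/C OK)
# batt: battery charge in percent, or empty if it could not be read
ups_action() {
  local pld="$1" batt="$2"
  if [ "$pld" = 0 ] ; then
    echo ok                    # on A/C power, nothing to do
  elif [ -z "$batt" ] ; then
    echo shutdown              # can't read the battery: play it safe
  elif [ "$batt" -lt 15 ] ; then
    echo shutdown              # battery low: graceful powerdown
  else
    echo beep                  # on battery: warn until power is restored
  fi
}

ups_action 0 80    # → ok
ups_action 1 ""    # → shutdown
ups_action 1 10    # → shutdown
ups_action 1 80    # → beep
</code></pre>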
<p>The files to do this are:</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/x728ups.service">systemd unit file</a></li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/scripts/upsmon.sh">upsmon.sh</a></li>
</ul>
<h2 id="Home+Assistant" name="Home+Assistant">Home Assistant</h2>
<p>I am using the <a href="http://wiki.geekworm.com/X728">X728 kit</a> for creating a <a href="https://www.home-assistant.io/">Home Assistant</a> installation.
<a href="https://www.home-assistant.io/">Home Assistant</a> has a "managed Operating System" called
<a href="https://github.com/home-assistant/operating-system">Home Assistant OS</a> which is a mostly read-only installation. This
makes it complicated to add your own "low-level" customizations. To include
these scripts and make them persistent across upgrades, I am hooking into the
<a href="https://rauc.io/">RAUC OTA</a> upgrade subsystem.</p>
<p>For that, I hook up to the <a href="https://rauc.readthedocs.io/en/latest/using.html#system-based-customization-handlers">System handlers</a>
which makes use of a <a href="https://rauc.readthedocs.io/en/latest/reference.html#sec-handler-interface">Handler Interface</a>.</p>
<p>With these scripts, I am able to move the customizations from a previous image
to the new upgraded image.</p>
<p>For this, I have a script <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/OTA/haos-x728.sh">haos-x728</a>
that injects the customizations
into a new installation image. This script also modifies <code>/etc/rauc/system.conf</code>
so that the customization handler is called during an upgrade.</p>
<p>The <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/OTA/post-install">post-install handler</a>
re-adds the handler to <code>/etc/rauc/system.conf</code>
and copies the necessary files to the updated image.</p>
<p>The customization scripts are <em>not</em> <a href="http://wiki.geekworm.com/X728">X728</a> specific; essentially they let
you copy all the files in a directory to the custom image. As such, I am using
them to inject not only these <a href="http://wiki.geekworm.com/X728">X728</a> scripts, but also the <code>vcgencmd</code> utility and a
<code>muninlite</code> agent. Also, the dependent binaries for the
<a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/OTA/post-install">post-install handler</a>
are injected in the same way.</p>
<p>For the <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/OTA/post-install">post-install handler</a>
to work properly you need to copy binaries for:</p>
<ul>
<li><code>gensquashfs</code></li>
<li><code>sqfs2tar</code></li>
</ul>
<p>And the dependent shared libraries that are not part of the <a href="https://github.com/home-assistant/operating-system">Home Assistant OS</a>
image:</p>
<ul>
<li><code>liblz4.so.1</code></li>
<li><code>liblz4.so.1.9.3</code></li>
<li><code>liblzma.so.5</code></li>
<li><code>liblzma.so.5.2.5</code></li>
<li><code>liblzo2.so.2</code></li>
<li><code>liblzo2.so.2.0.0</code></li>
<li><code>libselinux.so.1</code></li>
<li><code>libsquashfs.so.1</code></li>
<li><code>libsquashfs.so.1.1.0</code></li>
<li><code>libzstd.so.1</code></li>
<li><code>libzstd.so.1.4.8</code></li>
</ul>
<p>The simplest way to get these is to install <code>squashfs-tools-ng</code> on standard
<code>Raspberry PI OS</code> and copy those files from there.</p>
<p>The full set of customization files that I am using can be found
<a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/X728/OTA/x728rootfs.tar.gz">here</a>.</p>
<p>It contains:</p>
<p>RAUC handler:</p>
<ul>
<li><code>lib/rauc/post-install</code></li>
</ul>
<p><code>squashfs-tools-ng</code> (dependency of the RAUC handler):</p>
<ul>
<li><code>bin/gensquashfs</code></li>
<li><code>bin/sqfs2tar</code></li>
<li><code>lib/liblz4.so.1</code></li>
<li><code>lib/liblz4.so.1.9.3</code></li>
<li><code>lib/liblzma.so.5</code></li>
<li><code>lib/liblzma.so.5.2.5</code></li>
<li><code>lib/liblzo2.so.2</code></li>
<li><code>lib/liblzo2.so.2.0.0</code></li>
<li><code>lib/libselinux.so.1</code></li>
<li><code>lib/libsquashfs.so.1</code></li>
<li><code>lib/libsquashfs.so.1.1.0</code></li>
<li><code>lib/libzstd.so.1</code></li>
<li><code>lib/libzstd.so.1.4.8</code></li>
</ul>
<p>Actual <a href="http://wiki.geekworm.com/X728">X728</a> support scripts:</p>
<ul>
<li><code>bin/x728batt</code></li>
<li><code>etc/x728/clock.sh</code></li>
<li><code>etc/x728/upsmon.sh</code></li>
<li><code>etc/systemd/system/x728clock.service</code></li>
<li><code>etc/systemd/system/x728ups.service</code></li>
<li><code>etc/systemd/system/sysinit.target.wants/x728clock.service</code></li>
<li><code>etc/systemd/system/multi-user.target.wants/x728ups.service</code></li>
<li><code>lib/systemd/system-shutdown/gpio-poweroff</code></li>
</ul>
<p>Munin node:</p>
<ul>
<li><code>bin/munin-node</code></li>
<li><code>etc/systemd/system/sockets.target.wants/munin-node.socket</code></li>
<li><code>etc/systemd/system/munin-node.socket</code></li>
<li><code>etc/systemd/system/munin-node@.service</code></li>
<li><code>etc/muninlite.conf</code></li>
</ul>
cuylib
urn:uuid:be57f2ce-6906-558d-7480-de0e71817638
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a tiny library that implements a web-server-embedded editor.</p>
<p>You can find it in <a href="https://github.com/alejandroliu/0ink.net/tree/main/snippets/2022/cuylib">github</a>.</p>
<p>It can be used either from <a href="http://haserl.sourceforge.net/">haserl</a> or directly from
a <code>shell</code> script.</p>
<p>Features:</p>
<ul>
<li>Uses <a href="https://codemirror.net/">codemirror</a></li>
<li>Escape HTML entities (<code>html_enc</code>)</li>
<li>Decode URL escaping (<code>url_decode</code>)</li>
<li>Read POST form data (<code>post_data</code>)</li>
<li>Parse <code>QUERY_STRING</code> (<code>query_string</code> and <code>query_string_raw</code>)</li>
<li>Render HTML and Markdown documents with pre-processing (<code>cuy_render</code>)</li>
</ul>
<p>Support functions:</p>
<ul>
<li><code>codemirror_link</code> : Configured URL where the codemirror files can be found.</li>
<li><code>html_msg</code> : generate an HTML response</li>
<li><code>html_enc</code> : Encode special HTML characters</li>
<li><code>url_decode</code> : Decode URL-encoded strings</li>
<li><code>post_data</code> : Read data posted using an HTML POST request.</li>
<li><code>query_string_raw</code> : parse HTML query parameters.</li>
<li><code>query_string</code> : parse HTML query parameters and also decode URL encoding.</li>
</ul>
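<p>As an illustration of what these helpers do, here is one common way to
implement <code>url_decode</code> in shell. This is a generic bash idiom, not
necessarily the library's actual code, and it assumes inputs without literal
backslashes:</p>
<pre><code class="language-bash"># decode %XX escapes and '+' into the original characters (bash)
url_decode() {
  local s="${1//+/ }"           # '+' means space in form encoding
  printf '%b' "${s//%/\\x}"     # turn %XX into \xXX and let printf expand it
}

url_decode 'hello%20world%21'   # → hello world!
url_decode 'a+b%3Dc'            # → a b=c
</code></pre>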
<p>Editor components:</p>
<ul>
<li><code>cuy_header</code> : snippet of code for the HTML document header.</li>
<li><code>cuy_editform</code> : snippet of code for generating the HTML editor form</li>
<li><code>cuy_editarea</code> : snippet of code to bind codemirror to a text area</li>
<li><code>cuy_savecb</code> : snippet for the save command callback</li>
<li><code>cuy_render</code> : convert content into suitable HTML markup</li>
</ul>
<p>Main editing entry point:</p>
<ul>
<li><code>cuy_editapp</code> : A full editing page</li>
</ul>
Photoprism
urn:uuid:3b1a2d85-263e-a102-7a86-024a5fc8fb47
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="https://photoprism.app/">photoprism</a> is a web based photo management application.</p>
<p>From its website:</p>
<blockquote>
<p>PhotoPrism® is an AI-Powered Photos App for the Decentralized Web.
It makes use of the latest technologies to tag and find pictures
automatically without getting in your way. You can run it at home,
on a private server, or in the cloud.</p>
</blockquote>
<p><img src="https://docs.photoprism.app/img/preview.jpg" alt="photoprism preview" /></p>
<p>Features:</p>
<ul>
<li>Browse all your photos and videos without worrying about RAW conversion,
duplicates or video formats</li>
<li>search: Easily find specific pictures using powerful search filters</li>
<li>places: Includes four high-resolution world maps to bring back the
memories of your favorite trips</li>
<li>Play Live Photos™ by hovering over them in albums and search results</li>
<li>people: Recognizes the faces of your family and friends</li>
<li>Automatic classification of pictures based on their content and location</li>
</ul>
<p>What I found is that it does most things automatically.</p>
<p>My implementation:</p>
<ul>
<li>I use a docker instance for <a href="https://photoprism.app/">photoprism</a> linked to a <code>mysql</code> database.
(actually <code>mariadb</code>).
<ul>
<li><a href="https://docs.photoprism.app/getting-started/advanced/databases/">database setup</a></li>
</ul></li>
<li>This <a href="https://photoprism.app/">photoprism</a> instance is set as <code>AUTH=public</code>, so no authentication.
Users on my home network can connect directly. From the Internet, I am using an
<a href="https://nginx.org/">nginx</a> reverse proxy. This reverse proxy requires authentication.</li>
<li>For photo sharing, a different <a href="https://photoprism.app/">photoprism</a> instance is used pointing to the same
file system and mysql as the main instance. This instance however has <code>AUTH=password</code>
enabled, so only <code>shared</code> links can be visited.</li>
<li>For uploading photos from my iPhone, I am using <a href="https://link.photoprism.app/photosync">PhotoSync</a>. This can be
configured to upload directly to <a href="https://photoprism.app/">photoprism</a> using WebDav. (See instructions for
<a href="https://docs.photoprism.app/user-guide/sync/mobile-devices/">syncing with mobile devices</a>.)
On the other hand, it is complicated for me due to permission problems in my home
network. For that reason, I am using a small container running a <code>sshd</code> daemon
and I sync to that small container using <code>sftp</code> protocol.</li>
</ul>
<p>Some issues I found:</p>
<ul>
<li>face recognition is a bit wonky, especially for kids' faces.</li>
<li>some tweaks may be needed to get things to display just right.</li>
<li>running using the embedded <code>sqlite</code> database won't scale beyond a handful of
photos.</li>
<li>I have a photo library of 400GB. It took a long time to index.</li>
</ul>
<p>Before, I would copy files from my camera to my server. In the case of
video files, a re-encoding step was needed. With today's smartphones
this is not needed: the smartphone can sync directly to the server
and the files are already compressed with a good enough CODEC.</p>
SupervisorUI MF
urn:uuid:40ae0602-1d4b-729b-9290-5e692e12865c
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>In a previous <a href="/posts/2022/2022-08-25-supervisorui-redone.html">article</a>, I updated a <a href="https://github.com/Tabcorp/supervisorui/">supervisorui</a> project to work for me.</p>
<p>This updated version <a href="https://github.com/TortugaLabs/supervisorui-redone">supervisorui-redone</a> is essentially a PHP application
which is a different approach from the original <a href="https://github.com/Tabcorp/supervisorui/">supervisorui</a> project
which was more of a JavaScript application with some helper functionality implemented
in PHP.</p>
<p>As such, I figured that I could probably fix the <a href="https://github.com/Tabcorp/supervisorui/">supervisorui</a> code base
by eliminating the <code>Silex</code> library, while keeping most of the JavaScript framework
in place. This resulted in my <a href="https://github.com/TortugaLabs/SupervisorUI-mf">SupervisorUI-mf</a> dashboard.</p>
<p><img src="https://raw.githubusercontent.com/TortugaLabs/SupervisorUI-mf/master/screenshot.png" alt="screenshot" /></p>
<p>This maintains the JavaScript framework and simply removes the <code>Silex</code> dependency.</p>
<p>As before, the <a href="http://scripts.incutio.com/xmlrpc/">Incutio XML-RPC Library</a> was
updated so that it works with PHP8.</p>
<p>In addition I added/fixed the following functionality:</p>
<ul>
<li>Added links to <a href="http://supervisord.org/index.html">supervisor</a> built-in web UI.</li>
<li>Added links to <code>restart</code> and <code>reload config</code> for <a href="http://supervisord.org/index.html">supervisor</a> daemons.</li>
<li>Fixed the <code>updateServers</code> functionality, so that services status gets updated
every 30 seconds without having to reload the page. Similarly, starting/stopping
services do not need to reload the page as with <a href="https://github.com/TortugaLabs/supervisorui-redone">supervisorui-redone</a>.</li>
<li>Added the ability to configure ports in the server IP specifications.</li>
</ul>
<p>So the result is an updated <a href="https://github.com/Tabcorp/supervisorui/">supervisorui</a> which works at least for me.</p>
<p>Note that I am still keeping <a href="https://github.com/TortugaLabs/supervisorui-redone">supervisorui-redone</a> because it is more
static than <a href="https://github.com/TortugaLabs/SupervisorUI-mf">SupervisorUI-mf</a> which means that <a href="https://github.com/TortugaLabs/supervisorui-redone">supervisorui-redone</a>
works better with <strong>High Latency</strong> (high ping time) connections.</p>
Supervisorui REDONE
urn:uuid:b4115a52-6ed6-5e4b-2b4d-0edcaac97fd1
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Currently I am using docker containers to deploy applications. A number of those
containers make use of <a href="http://supervisord.org/index.html">supervisord</a> for managing processes. While
<a href="http://supervisord.org/index.html">supervisord</a> itself comes with a UI, it is unhandy for me because each
container is its own <a href="http://supervisord.org/index.html">supervisord</a> instance.</p>
<p>So I was interested in some software that would let me manage multiple
<a href="http://supervisord.org/index.html">supervisord</a> instances in a single page. Turns out that there
are several tools listed <a href="http://supervisord.org/plugins.html#dashboards-and-tools-for-multiple-supervisor-instances">here</a>. Unfortunately, none of these
worked for me.</p>
<p>So with the power of open source I rolled my own. I based mine on <a href="https://github.com/Tabcorp/supervisorui/">supervisorui</a>.
Unfortunately, <a href="https://github.com/Tabcorp/supervisorui/">supervisorui</a> hasn't been updated in 10 years. What is worse,
it depends on the <code>Silex</code> library, which doesn't seem to exist anymore.</p>
<p>So I ripped out a bunch of complex functionality and recreated it as
<a href="https://github.com/TortugaLabs/supervisorui-redone">supervisorui-redone</a>.</p>
<p><img src="https://raw.githubusercontent.com/TortugaLabs/supervisorui-redone/main/img/screenshot.png" alt="screenshot" /></p>
<p>It is indeed a quick and dirty implementation, unlike the original
<a href="https://github.com/Tabcorp/supervisorui/">supervisorui</a>, which has heavier JavaScript dependencies.
<a href="https://github.com/Tabcorp/supervisorui/">supervisorui</a> is mostly a JavaScript application that simply
uses <code>php</code> for backend access to <a href="http://supervisord.org/index.html">supervisord</a>. I assume
that this makes it more interactive.</p>
<p>My re-worked version is basically a PHP application.</p>
<p>As such, I removed the dependencies on:</p>
<ul>
<li><a href="http://silex.sensiolabs.org/">Silex</a></li>
<li><a href="http://twitter.github.com/bootstrap/">Twitter Bootstrap</a> javascript, only
CSS is in use.</li>
<li><a href="http://jquery.com/">jQuery</a></li>
<li><a href="http://documentcloud.github.com/backbone/">Backbone.js</a></li>
</ul>
<p>I also fixed the <a href="http://scripts.incutio.com/xmlrpc/">Incutio XML-RPC Library</a>
so that it works with PHP8.</p>
voidlinux virtualization
urn:uuid:d473bb2f-5b8e-a4e6-d77f-1229d97eb0c7
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This recipe is for setting up virtualization on a voidlinux desktop.</p>
<p>Use this <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/void-kvm/setup.sh">setup script</a> to set things up on void linux.</p>
<h2 id="Connecting+to+libvirtd" name="Connecting+to+libvirtd">Connecting to libvirtd</h2>
<p>Note that the <code>virsh</code> and <code>virt-manager</code> commands connect to different <code>libvirtd</code>
sessions by default.</p>
<p><code>virsh</code> defaults to <code>qemu:///session</code> while <code>virt-manager</code> to <code>qemu:///system</code>.</p>
<p>It is better to use <code>qemu:///system</code> as <code>qemu:///session</code> does not seem to see
all available resources.</p>
<p>To force <code>virsh</code> to connect to the right session you can use commands such as:</p>
<pre><code class="language-bash">virsh --connect qemu:///system net-list
virsh --connect qemu:///system pool-list</code></pre>
<h2 id="Create+network" name="Create+network">Create network</h2>
<p>Set-up a bridge for internal networking using <code>NetworkManager</code>. See this <a href="https://www.happyassassin.net/posts/2014/07/23/bridged-networking-for-libvirt-with-networkmanager-2014-fedora-21/">article</a>
for reference.</p>
<p>Go to <code>NetworkManager</code> menu and use the <code>Edit network connections</code> applet:</p>
<ul>
<li>add new bridge connection</li>
<li>give a suitable name</li>
<li>disable IPv4 and IPv6</li>
<li>everything can be left as default.</li>
</ul>
<h2 id="Import+images" name="Import+images">Import images</h2>
<p>This needs to be done on the CLI (the GUI doesn't seem to allow it):</p>
<pre><code class="language-bash">virsh --connect qemu:///system vol-create-as $pool $vol 32k --format $format
virsh --connect qemu:///system vol-upload $vol $file</code></pre>
<p>For <code>format</code>, use <code>raw</code> for ISO images and <code>qcow2</code> for actual drives.</p>
<h2 id="Setup+VM" name="Setup+VM">Setup VM</h2>
<p>Just create VM as normal.</p>
<p>In the <code>virt-manager</code> create-VM wizard:</p>
<ol>
<li>manual install</li>
<li>alpine linux</li>
<li>mem: depends</li>
<li>Create storage (4GB is enough)</li>
<li>Name the VM and select <em>Customize configuration</em>. For the network, use NAT.</li>
<li>Add a CDROM, make it readonly and shareable</li>
<li>Add a network interface connected to the internal bridge.</li>
<li>Add a boot device.</li>
<li>Add a shared filesystem:</li>
</ol>
<ul>
<li>driver: virtio-9p</li>
<li>source path: <code>/var/lib/libvirt/filesystems/shared</code></li>
<li>target path: <code>/shared</code></li>
</ul>
<p>Prepare the system:</p>
<ol>
<li>Create the filesystem:</li>
</ol>
<ul>
<li><code>mkfs.vfat /dev/vda</code></li>
<li><code>apk add syslinux</code></li>
<li><code>syslinux /dev/vda</code></li>
</ul>
<ol start="2">
<li>Copy the media:</li>
</ol>
<ul>
<li><code>mount -t vfat /dev/vda /mnt</code></li>
<li><code>cp -av /media/cdrom/. /mnt</code></li>
<li>edit <code>/mnt/boot/syslinux/syslinux.cfg</code> and
add: <code>dom0_mem=1024M</code></li>
<li><code>umount /mnt</code></li>
</ul>
<ol start="3">
<li>Remove the cdrom:</li>
</ol>
<ul>
<li>power off</li>
<li>remove cdrom</li>
<li>change boot options</li>
</ul>
<ol start="4">
<li>re-start system</li>
<li>setup-alpine</li>
</ol>
<ul>
<li>enter fqdn</li>
<li>set interface eth0 to dhcp</li>
</ul>
<ol start="6">
<li>Mount shared fs:</li>
</ol>
<ul>
<li><code>mount -t 9p -o trans=virtio /shared /shared</code></li>
<li>or in <code>fstab</code>:</li>
<li><code>/sharepoint /share 9p trans=virtio,version=9p2000.L,rw 0 0</code></li>
</ul>
<h2 id="Thin+provisioning" name="Thin+provisioning">Thin provisioning</h2>
<p>Create an overlay file like so:</p>
<pre><code class="language-bash">qemu-img create -b ubuntu-20.04-server-cloudimg-amd64-disk-kvm.img -F qcow2 -f qcow2 guest-1.qcow2</code></pre>
A couple of useful sites for development
urn:uuid:1e8d2471-41de-9fa3-9d7b-172685605312
2024-03-05T00:00:00+01:00
Alejandro Liu
<h2 id="Unicode" name="Unicode"><a href="https://www.compart.com/en/unicode/">Unicode</a></h2>
<p>Can be useful for looking up <code>unicode</code> code points. Particularly
useful for looking up accented characters. Another interesting
use is for UI graphics characters.</p>
<h2 id="Unicode+search" name="Unicode+search"><a href="http://xahlee.info/comp/unicode_index.html">Unicode search</a></h2>
<p>Another site to search for unicode characters.</p>
<h2 id="gist-hkan" name="gist-hkan"><a href="https://gist.github.com/hkan/264423ab0ee720efb55e05a0f5f90887">gist-hkan</a></h2>
<p>This can be used to look-up emojis and shortcode for emojis.</p>
<h2 id="color+picker" name="color+picker"><a href="https://www.w3schools.com/colors/colors_picker.asp">color picker</a></h2>
<p>Lets you visually select a color so that you can paste
its definition into your HTML or CSS documents.</p>
<h2 id="color+names" name="color+names"><a href="https://www.w3schools.com/colors/colors_names.asp">color names</a></h2>
<p>This is a list of "standard" color names.</p>
Keyboard Mouse control
urn:uuid:0e814e32-a136-e603-5a56-1d55073a66b3
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This comes in handy when working at a <strong>colo</strong> or someplace where you
don't have a mouse and then find yourself needing to use <em>X11</em>. Press
the following key combo:</p>
<pre><code>Ctrl-Shift-Numlock</code></pre>
<p>Now you can control the mouse pointer using the number pad. The
key bindings are:</p>
<h2 id="Move+the+mouse+pointer" name="Move+the+mouse+pointer">Move the mouse pointer</h2>
<ul>
<li>7, 8, 9 are the up directions</li>
<li>4, 6 are left and right</li>
<li>1, 2, 3 are the down directions</li>
</ul>
<h2 id="To+control+the+mouse+buttons" name="To+control+the+mouse+buttons">To control the mouse buttons</h2>
<ul>
<li><code>/</code> selects the left mouse button</li>
<li><code>*</code> selects the middle mouse button</li>
<li><code>-</code> selects the right mouse button</li>
</ul>
<p>This only selects the mouse button but does not press it. To
actually use the mouse button:</p>
<ul>
<li><code>5</code> mouse click</li>
<li><code>+</code> double mouse click</li>
<li><code>0</code> to press and hold the mouse button (e.g. for dragging)</li>
<li><code>.</code> to release the currently selected mouse button.</li>
</ul>
<p>For example, to do a mouse-click with the middle button you
first press the <code>*</code> key and then <code>5</code> to click.</p>
flatpak
urn:uuid:5eccd25b-a75e-4650-343a-16e71cb041c3
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Flatpak is a utility for software deployment and package management for
Linux. It is advertised as offering a sandbox environment in which users
can run application software in isolation from the rest of the system.
Flatpak was developed as part of the freedesktop.org project and was
originally called xdg-app.</p>
<h2 id="Snap+vs+Flatpak" name="Snap+vs+Flatpak">Snap vs Flatpak</h2>
<p>Snaps and Flatpaks are often compared to each other because they both
make it super easy for Linux users to get the latest versions of
desktop applications. If a Linux user wants to install the latest
version of apps like Slack, Krita or Blender, either tool will work
just fine. There is one fundamental difference between Snaps and
Flatpaks, however. While both are systems for distributing Linux apps,
snap is also a tool to build Linux distributions.</p>
<p>Flatpak is designed to install and update “apps”; user-facing software
such as video editors, chat programs and more. Your operating system,
however, contains a lot more software than apps. It contains a kernel,
printer drivers, audio subsystems and more. While Flatpak assumes this
software is installed using a traditional package manager, snaps can
install anything. These are some examples.</p>
<ul>
<li>There is current work ongoing to put the entire Linux printing stack
inside of a snap. This has the advantage that printer drivers can be
updated independently from the operating system. Once this work is
complete, every single Ubuntu version will be able to use the latest
printer drivers. Trying to use new printers on old Linux distributions
can be very frustrating, and installing newer printer drivers can be
risky. Having the printing stack in a snap will solve this issue.</li>
<li>A few years ago, Ubuntu drastically changed the system theme. When
the "CommuniTheme" initiative started, they wanted an easy way to
make the latest updates of the theme available to users immediately.
Normally, a system theme is shipped together with the distro, so
users do not get theme updates after the distro releases. For
“CommuniTheme”, however, they fixed this by putting the system
theme inside of a snap. Because of this, users got updates to their
theme every day, instead of every 6 months. This is again not
something Flatpak was built for. Flatpak applications can update
their own theme, but it is not possible to ship the system theme
as a Flatpak. This is because Flatpak was designed for distributing
apps, not building an entire Linux distribution.</li>
<li>Even the Linux kernel, the most fundamental part of a Linux
distribution, can be put in a snap. This is used a lot for IoT
devices such as routers and satellites. The impact of a broken
kernel update is catastrophic if you require a rocket in order
to plug a USB stick into the device. Snaps allow these devices
to safely update their kernel and automatically roll back if
something goes wrong during the process.</li>
</ul>
<p>As a result, it’s possible to build an entire operating system using
only snaps, which is exactly what Ubuntu Core is.</p>
<p>Flatpak was designed to give developers an easy way to bring their
apps directly to users, and it does that job very well. The focused
approach of Flatpak even has a big advantage: it’s a lot easier for
a distribution to integrate with Flatpak because it does a lot less.
The tradeoff is that it only provides app distribution; it doesn’t
solve the issues of distributing entire operating systems. Fedora
Silverblue, for example, creates an immutable desktop operating
system by using Flatpak for app distribution and OSTree
for distributing the OS itself.</p>
<p>Unfortunately, for my Linux distro of choice (<a href="https://voidlinux.org/">void linux</a>), only Flatpak is
available.</p>
<h2 id="Install+Flatpak" name="Install+Flatpak">Install Flatpak</h2>
<p>To install Flatpak, run the following:</p>
<pre><code>sudo xbps-install -S flatpak
</code></pre>
<h2 id="After+installation" name="After+installation">After installation</h2>
<p>Once flatpak is installed, you can choose to install software
system-wide or on a per-user basis. The default is to install
system-wide, which requires root privilege (or <code>sudo</code>). You may also
specify <code>--system</code> if you want to force a system-wide install.</p>
<p>To manage software on a per-user basis, use the <code>--user</code> option.</p>
<p>The best place to find software for Flatpak is <a href="https://flathub.org">flathub</a>.
You can browse the catalogue in <a href="https://flathub.org">https://flathub.org</a>.</p>
<p>If you want to start installing software you need to first add the remote:</p>
<pre><code>sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
</code></pre>
<p>Or for per-user installation:</p>
<pre><code>flatpak --user remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
</code></pre>
<p>Afterwards you can install software:</p>
<pre><code>sudo flatpak install pkg-id ...
</code></pre>
<p>Or per-user:</p>
<pre><code>flatpak --user install pkg-id ...
</code></pre>
<h2 id="Interesting+flatpaks%3A" name="Interesting+flatpaks%3A">Interesting flatpaks:</h2>
<ul>
<li><a href="https://flathub.org/apps/details/com.spotify.Client">com.spotify.Client</a> : Spotify desktop client</li>
<li><a href="https://flathub.org/apps/details/dev.alextren.Spot">dev.alextren.Spot</a> : Alternative spotify client</li>
<li><a href="https://flathub.org/apps/details/org.chromium.Chromium">org.chromium.Chromium</a> : Open source chrome.</li>
<li><a href="https://flathub.org/apps/details/org.mozilla.firefox">org.mozilla.firefox</a> : Firefox Web browser</li>
<li><a href="https://flathub.org/apps/details/com.valvesoftware.Steam">com.valvesoftware.Steam</a> : Steam Launcher</li>
<li><a href="https://flathub.org/apps/details/com.simplenote.Simplenote">com.simplenote.Simplenote</a> : Simple Note</li>
<li><a href="https://flathub.org/apps/details/com.mojang.Minecraft">com.mojang.Minecraft</a> : Minecraft launcher</li>
</ul>
<h2 id="Basic+commands" name="Basic+commands">Basic commands</h2>
<p>Search:</p>
<pre><code>flatpak search gimp
</code></pre>
<p>Install:</p>
<pre><code>sudo flatpak install flathub org.gimp.GIMP
</code></pre>
<p>Or:</p>
<pre><code>flatpak --user install flathub org.gimp.GIMP
</code></pre>
<p>Note that the current version of flatpak will do a search, so you don't
have to specify the ID.</p>
<p>Running applications:</p>
<pre><code>flatpak run org.gimp.GIMP
</code></pre>
<p>This will automatically determine whether the application was installed
system-wide or on a per-user basis. You can use the <code>--system</code> or
<code>--user</code> flags to specify which to use if it was installed multiple
times; otherwise the system-wide installation will be used.</p>
<p>Updating:</p>
<pre><code>sudo flatpak update
</code></pre>
<p>Or:</p>
<pre><code>flatpak --user update
</code></pre>
<p>This will update all installed applications.</p>
<p>List installed applications:</p>
<pre><code>flatpak list
</code></pre>
<p>This will list applications and runtimes, indicating whether each was
installed system-wide or per-user. To list only applications:</p>
<pre><code>flatpak list --app
</code></pre>
<p>Removing applications:</p>
<pre><code>sudo flatpak uninstall org.gimp.GIMP
</code></pre>
<p>Or</p>
<pre><code>flatpak --user uninstall org.gimp.GIMP
</code></pre>
<p>Keep in mind that uninstalling applications will not delete files in
your $HOME directory. These are kept in <code>$HOME/.var/app/$PKGID</code>
and need to be deleted manually.</p>
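<p>As a sketch (assuming the default <code>$HOME/.var/app</code> layout; the
function name and the <code>org.example.App</code> id are made up for
illustration), cleaning up after an uninstall could look like this:</p>
<pre><code class="language-bash">#!/bin/sh
# clean_flatpak_data: remove the per-user data a flatpak app leaves
# behind in $HOME/.var/app after the app itself has been uninstalled.
clean_flatpak_data() {
  appdata="$HOME/.var/app/$1"
  if [ -d "$appdata" ]; then
    rm -rf "$appdata"
    echo "removed $appdata"
  else
    echo "no leftover data for $1"
  fi
}

# Example: clean_flatpak_data org.example.App
</code></pre>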
<h2 id="Managing+repositories" name="Managing+repositories">Managing repositories</h2>
<p>List remotes:</p>
<pre><code>flatpak remotes</code></pre>
<p>This gives a list of the existing remotes that have been added. The
list indicates whether each remote has been added per-user or
system-wide.</p>
<p>Add a remote:</p>
<pre><code class="language-bash">sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
</code></pre>
<p>Or:</p>
<pre><code class="language-bash">flatpak --user remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
</code></pre>
<p>Remove a remote:</p>
<pre><code>sudo flatpak remote-delete flathub
</code></pre>
<p>Or:</p>
<pre><code>flatpak --user remote-delete flathub
</code></pre>
<h2 id="Troubleshooting" name="Troubleshooting">Troubleshooting</h2>
<p>Flatpak has a few commands that can help you to get things working
again when something goes wrong.</p>
<p>To remove runtimes and extensions that are not used by installed
applications, use:</p>
<pre><code>sudo flatpak uninstall --unused
</code></pre>
<p>Or:</p>
<pre><code>flatpak --user uninstall --unused
</code></pre>
<p>To fix inconsistencies with your local installation, use:</p>
<pre><code>sudo flatpak repair
</code></pre>
<p>Or:</p>
<pre><code>flatpak --user repair
</code></pre>
<p>Flatpak also has a number of commands to manage the portal
permissions of installed apps. To reset all portal permissions
for an app, use flatpak permission-reset:</p>
<pre><code>sudo flatpak permission-reset org.gimp.GIMP
</code></pre>
<p>Or</p>
<pre><code>flatpak --user permission-reset org.gimp.GIMP
</code></pre>
<p>To find out what changes have been made to your Flatpak installation
over time, you can take a look at the logs (since 1.2):</p>
<pre><code>flatpak history
</code></pre>
<p>However, this requires <code>libsystemd</code> which is not used in void linux.</p>
My git release script
urn:uuid:f399e4fa-f3bc-c395-cab8-236d952e6740
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I always had issues remembering how to create releases.
So in order to standardise things, I wrote this script:</p>
<ul>
<li><a href="https://github.com/TortugaLabs/my-gh-tools/blob/main/ghrelease.sh">ghrelease</a></li>
</ul>
<p>So whenever I am ready to release I would then just issue
the command:</p>
<pre><code class="language-bash">./ghrelease vX.Y.Z</code></pre>
<h2 id="Pre-requisistes%3A" name="Pre-requisistes%3A">Prerequisites:</h2>
<p>You obviously need <code>git</code>. But also you would need
<a href="https://github.com/cli/cli">github-cli</a>.</p>
<p>Your repository must also be <em>clean</em>, without any
pending commits.</p>
<p>You must be on the <em>default</em> branch (usually <code>main</code> or
<code>master</code>), unless doing a <strong>pre-release</strong>.</p>
<p>Optionally, you may have a <code>wfscripts/checks</code> directory
containing checking scripts.</p>
<h2 id="What+happens+on+release" name="What+happens+on+release">What happens on release</h2>
<ol>
<li>remote <code>--tags</code> are synchronised with local tags.</li>
<li>if a tag of the same name already exists, the release
is stopped.</li>
<li>check if we are on the <code>default</code> branch (unless pre-release)</li>
<li>check if there are any uncommitted changes.</li>
<li>If available, <code>wfscripts/checks</code> is run using <code>run-parts</code></li>
<li>Release notes are created based on log entries since the
last release (the previous annotated tagged commit)</li>
<li><code>VERSION</code> or <code>version.h</code> is updated and committed.</li>
<li>A new annotated tag is created.</li>
<li>Commits and tags are pushed to the remote (origin).</li>
<li>The release is created using <code>github-cli</code>.</li>
</ol>
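<p>The version argument convention used above can be sketched as a small
shell check (this is just an illustration of the naming scheme, not
code from the actual <code>ghrelease</code> script):</p>
<pre><code class="language-bash">#!/bin/sh
# check_version_tag: classify a release argument as a release
# (vX.Y.Z), a pre-release (vX.Y.Z-rcN) or invalid.
check_version_tag() {
  case "$1" in
    v[0-9]*.[0-9]*.[0-9]*-rc[0-9]*) echo pre-release ;;
    v[0-9]*.[0-9]*.[0-9]*) echo release ;;
    *) echo invalid ; return 1 ;;
  esac
}

# Example: check_version_tag v1.2.3     -> release
#          check_version_tag v1.2.3-rc1 -> pre-release
</code></pre>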
<h2 id="Pre-releases" name="Pre-releases">Pre-releases</h2>
<p>You can create pre-releases. These do not have to be on the
<em>default</em> branch. To do this use the <code>--rc</code> (Release candidate)
option:</p>
<pre><code class="language-bash">./ghrelease vX.Y.Z-rcN</code></pre>
<p>This will create a release in <code>github</code> but tag it as <strong>pre-release</strong>.</p>
<p>After release, you may delete all pre-release candidates:</p>
<pre><code class="language-bash">./ghrelease --purge</code></pre>
Meta Database
urn:uuid:af84b35f-1414-ba08-1a2c-57389ac07d21
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So I was looking for a way to version control database schemas,
but I never found something that worked for me. I found all
these options, but none of them matched what I wanted:</p>
<ul>
<li><a href="http://freecode.com/projects/metabase">metabase</a></li>
<li><a href="https://github.com/victorstanciu/dbv">dbv</a></li>
<li><a href="http://deltasql.sourceforge.net/features.php">delta sql</a></li>
<li><a href="http://propelorm.org/">Propel</a></li>
<li><a href="https://phinx.org/">phinx</a></li>
<li><a href="https://www.doctrine-project.org/">doctrine</a></li>
<li><a href="https://redbeanphp.com/index.php">redbeanphp</a></li>
</ul>
<p>In the end, I settled on doing the following.</p>
<p>For the first schema release, I would create a file called:</p>
<ul>
<li>init-1.0.sql</li>
</ul>
<p>This would contain all the sql statements needed to initialize
the database. To indicate this is the current version I would
create a symlink to it:</p>
<ul>
<li>init.sql -> init-1.0.sql</li>
</ul>
<p>For the next schema update I would create a file with the commands
to transform the schema, i.e. additional <code>CREATE DATABASE</code> or
<code>ALTER TABLE</code> statements:</p>
<ul>
<li>upgrade-1.1.sql</li>
</ul>
<p>The <code>upgrade</code> file would thus be the working version
used during development. For a release I would then create a
new <code>init</code> file, either manually or by running:</p>
<pre><code class="language-bash">cat init-1.0.sql upgrade-1.1.sql > init-1.1.sql</code></pre>
<p>To indicate that this is the current version, the symlink would
be updated:</p>
<ul>
<li>init.sql -> init-1.1.sql</li>
</ul>
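<p>The release flow above can be sketched as shell commands (file names
follow the convention described; the function name is made up):</p>
<pre><code class="language-bash">#!/bin/sh
# release_schema: fold the upgrade script into a fresh init file and
# repoint the init.sql symlink at the new current version.
release_schema() {
  prev="$1" next="$2"    # e.g. release_schema 1.0 1.1
  cat "init-$prev.sql" "upgrade-$next.sql" > "init-$next.sql"
  ln -sf "init-$next.sql" init.sql
}
</code></pre>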
<h2 id="Final+notes" name="Final+notes">Final notes</h2>
<p>This ends up producing a history of schema changes and scripts that
can be used to upgrade the database schema from any version to
any later version.</p>
lnbin
urn:uuid:51552e6a-26df-0f25-c998-309b5a7d10a9
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is my <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/lnbin">lnbin</a> script.</p>
<p>This is a program for managing symlinks in a <code>/usr/local/bin</code>
directory. It is similar to stow, lndir, cleanlinks and
others.</p>
<p>The approach used by <em>lnbin</em> is based on Stow: install
each package into its own tree, then use symbolic links to populate a
shared bin directory so that the commands are in the executable path.</p>
<p>When run, <em>lnbin</em> examines packages in <code>pkgs-dir</code> and the
<code>target</code> directory (see OPTIONS), adding or removing links as
needed.</p>
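<p>The core of the link-farm idea can be sketched in a few lines of
shell (this is just the approach, not the actual <em>lnbin</em> script,
which also removes obsolete links and has more options):</p>
<pre><code class="language-bash">#!/bin/sh
# link_pkg_bins: symlink every executable found under pkgs-dir/*/bin
# into the target directory so it appears in the executable path.
link_pkg_bins() {
  pkgs="$1" target="$2"
  for exe in "$pkgs"/*/bin/*; do
    [ -x "$exe" ] || continue
    ln -sf "$exe" "$target/$(basename "$exe")"
  done
}

# Example: link_pkg_bins /usr/local/pkgs /usr/local/bin
</code></pre>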
<h2 id="Sample+usage%3A" name="Sample+usage%3A">Sample usage:</h2>
<h3 id="pkg+installation" name="pkg+installation">pkg installation</h3>
<p>The standard way to use <em>lnbin</em> is:</p>
<ul>
<li>download source package</li>
<li>build and install package</li>
</ul>
<pre><code class="language-bash"># extract archive
tar zxvf archive-x.x.tar.gz
cd archive-x.x
# GNU autoconf
./configure --prefix="/usr/local/pkgs/archive-x.x"
make
# Package installation
make install
# ... or ...
make install DESTDIR=/usr/local/pkgs/archive-x.x</code></pre>
<ul>
<li>update symlinks in /usr/local/bin</li>
</ul>
<pre><code class="language-bash">cd /usr/local/bin
lnbin -v -x ../pkgs</code></pre>
<p>This will add the new links (and also remove/update obsolete/changed links)</p>
<h3 id="Removing+packages" name="Removing+packages">Removing packages</h3>
<pre><code class="language-bash">rm -rf /usr/local/pkgs/archive-x.x
cd /usr/local/bin
lnbin -v -x ../pkgs</code></pre>
<h3 id="Updating+symlinks+%28after+upgrade%29" name="Updating+symlinks+%28after+upgrade%29">Updating symlinks (after upgrade)</h3>
<pre><code>cd /usr/local/bin
lnbin -v -x ../pkgs</code></pre>
<p>This will add new links and/or remove obsolete links</p>
Linux Icons
urn:uuid:a76da165-9f8f-404d-cca6-b764d73770c4
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>A quick note on how to add icons to menus in a Linux desktop.</p>
<ol>
<li>Create the icon image in: <code>/usr/share/pixmaps</code>.
<ul>
<li>png and svg (and maybe others) are supported.</li>
<li>24x24 seems to be a good size for menus.</li>
</ul></li>
<li>You need to create a <code>.desktop</code> file in <code>/usr/share/applications</code>.
<ul>
<li><a href="https://specifications.freedesktop.org/menu-spec/latest/index.html">Desktop menu specification</a></li>
<li><a href="https://specifications.freedesktop.org/menu-spec/latest/apa.html">Registered categories</a></li>
</ul></li>
</ol>
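<p>A minimal <code>.desktop</code> file could look like this (all the names
here, such as <code>myapp</code>, are hypothetical placeholders):</p>
<pre><code class="language-text">[Desktop Entry]
Type=Application
Name=My App
Comment=A hypothetical example application
Exec=myapp
Icon=myapp
Terminal=false
Categories=Utility;
</code></pre>
<p>Save it as e.g. <code>/usr/share/applications/myapp.desktop</code>, with the
icon in <code>/usr/share/pixmaps/myapp.png</code>.</p>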
<p>User desktop files:</p>
<ul>
<li>These are located in:
<ul>
<li><code>$HOME/.local/share/applications</code></li>
</ul></li>
<li>Icons can be found here:
<ul>
<li><code>$HOME/.local/icons</code></li>
<li><strong>I am not sure about this one</strong></li>
</ul></li>
<li>Autostart files:
<ul>
<li><code>$HOME/.config/autostart</code>.</li>
</ul></li>
</ul>
<p>See also: <a href="https://wiki.archlinux.org/title/desktop_entries">archlinux wiki</a></p>
<p>There is a command line utility:</p>
<ul>
<li><a href="https://linux.die.net/man/1/xdg-desktop-menu">xdg-desktop-menu</a></li>
</ul>
nas ops cmd
urn:uuid:52f4adc5-bdea-18d7-6c9a-b733dd637ddb
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is my <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2022/opcmd">op</a> script.</p>
<p>This is a stupidly simple script to elevate privileges in order to
manage NFS shares on my QNAP NAS.</p>
<p>The idea is that the NFS shares do <code>squash-root</code>, so admin access is
disallowed through NFS. The script gives a convenient way to issue
root-level commands not over NFS but over <code>ssh</code>
(with ssh authentication), which should provide
stronger security.</p>
<p>This script makes the following assumptions:</p>
<ul>
<li>the user has a <code>ssh-key</code> with admin access on the NAS.</li>
<li>NFS is mounted using <code>autofs</code> and is on the <code>/net</code> virtual folder.</li>
</ul>
<p>It works from the current working directory at the time
the command is launched. It resolves any symlinks in the path and
checks that the directory is under the <code>/net/</code> virtual folder, so
the NFS server name is the second component of the path.</p>
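<p>That path logic can be sketched like this (assuming the
<code>/net/&lt;server&gt;/&lt;share&gt;</code> autofs layout described above; the
function name is made up, and the real script also resolves symlinks first):</p>
<pre><code class="language-bash">#!/bin/sh
# nas_host_from_path: given a directory under the /net autofs tree,
# print the NFS server name, i.e. the second path component.
nas_host_from_path() {
  case "$1" in
    /net/*) echo "$1" | cut -d/ -f3 ;;
    *) echo "not on /net" ; return 1 ;;
  esac
}

# Example: nas_host_from_path /net/mynas/share/dir  -> mynas
</code></pre>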
Linux stuff
urn:uuid:3257e522-8619-1809-f127-11b1206d1434
2024-03-05T00:00:00+01:00
Alejandro Liu
<h2 id="Sudoers" name="Sudoers">Sudoers</h2>
<p>Since <a href="https://www.sudo.ws/">sudo</a> v1.9, it is possible to use the following
statements:</p>
<ul>
<li><code>#includedir</code></li>
<li><code>@includedir</code></li>
</ul>
<p>This is better for adding sudo rules than modifying
the <code>/etc/sudoers</code> file directly.</p>
<p>Make sure that the <code>includedir</code> statement is the <strong>LAST</strong> entry
in <code>/etc/sudoers</code> and that the files in the directory:</p>
<ul>
<li>have names that do not contain <code>.</code> (dots)</li>
<li>are owned by <code>root</code>:<code>root</code> with permissions set to <code>0440</code>.</li>
</ul>
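<p>A hypothetical drop-in rule (the user name, command and file name
are examples only) would then look like this:</p>
<pre><code class="language-text"># /etc/sudoers.d/deploy  -- note: no dots in the file name
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp
</code></pre>
<p>You can check the syntax of such a file with
<code>visudo -c -f /etc/sudoers.d/deploy</code>.</p>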
Graphviz markdown extensions
urn:uuid:c55b2186-f38c-68f5-0246-7ea696f37367
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I have enabled several extensions to my <a href="https://github.com/getpelican/pelican">pelican</a> website.</p>
<p>One that I wanted to include was <a href="https://graphviz.org/">graphviz</a>. I searched
for one and, while I found a few, they somehow did not work for me.</p>
<p>So I wrote my own: <a href="https://github.com/alejandroliu/0ink.net/blob/master/src/mdx/mdx_graphviz.py">mdx_graphviz</a>.</p>
<p>It is quite straightforward. You just need to create blocks like:</p>
<pre><code>dot {
digraph G {
rankdir=LR
Earth [peripheries=2]
Mars
Earth -> Mars
}
}</code></pre>
<p>You can use this <a href="http://magjac.com/graphviz-visual-editor/">Graphviz Visual Editor</a> for a more
interactive approach.</p>
Pelican Test page
urn:uuid:b583a149-ccae-2045-d474-7c779cc99be4
2024-03-05T00:00:00+01:00
Alejandro Liu
<div id="toc"><ul>
<li><a href="#shortcodes">shortcodes</a></li>
<li><a href="#mytags">mytags</a></li>
<li><a href="#Drawings">Drawings</a>
<ul>
<li><a href="#aafigure">aafigure</a></li>
<li><a href="#blockdiag">blockdiag</a></li>
</ul></li>
<li><a href="#mdx_include">mdx_include</a></li>
<li><a href="#GFM+style+check+lists">GFM style check lists</a></li>
<li><a href="#my+mdx+variables">my mdx variables</a></li>
</ul></div>
<p>This page is used for testing some pelican and markdown extensions
I added.</p>
<h2 id="shortcodes" name="shortcodes">shortcodes</h2>
<p>OK, this is awkward... I am not sure if this is needed.</p>
<h2 id="mytags" name="mytags">mytags</h2>
<p>Using <del>del</del> and <ins>ins</ins>.</p>
<p>test <mark>mark</mark> tags. How about E=mc<sup>2</sup> and H<sub>2</sub>O.</p>
<h2 id="Drawings" name="Drawings">Drawings</h2>
<h3 id="aafigure" name="aafigure">aafigure</h3>
<div><svg class="bob" font-family="arial" font-size="14" height="48" width="312" xmlns="http://www.w3.org/2000/svg">
<defs>
<marker id="triangle" markerHeight="10" markerUnits="strokeWidth" markerWidth="10" orient="auto" refX="15" refY="10" viewBox="0 0 50 20">
<path d="M 0 0 L 30 10 L 0 20 z"/>
</marker>
</defs>
<style>
line, path {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle.solid {
fill:black;
}
circle.open {
fill:transparent;
}
tspan.head{
fill: none;
stroke: none;
}
</style>
<path d=" M 4 8 L 8 8 M 4 8 L 4 16 M 8 8 L 16 8 M 8 8 L 16 8 L 24 8 M 16 8 L 24 8 L 32 8 M 24 8 L 32 8 L 40 8 M 32 8 L 40 8 L 48 8 M 40 8 L 48 8 L 56 8 M 48 8 L 56 8 L 64 8 M 56 8 L 64 8 L 72 8 M 64 8 L 72 8 L 80 8 M 72 8 L 80 8 M 84 8 L 80 8 M 84 8 L 84 16 M 116 8 L 120 8 M 116 8 L 116 16 M 120 8 L 128 8 M 120 8 L 128 8 L 136 8 M 128 8 L 136 8 L 144 8 M 136 8 L 144 8 L 152 8 M 144 8 L 152 8 L 160 8 M 152 8 L 160 8 L 168 8 M 160 8 L 168 8 M 172 8 L 168 8 M 172 8 L 172 16 M 204 8 L 208 8 M 204 8 L 204 16 M 208 8 L 216 8 M 208 8 L 216 8 L 224 8 M 216 8 L 224 8 L 232 8 M 224 8 L 232 8 L 240 8 M 232 8 L 240 8 L 248 8 M 240 8 L 248 8 L 256 8 M 248 8 L 256 8 L 264 8 M 256 8 L 264 8 L 272 8 M 264 8 L 272 8 L 280 8 M 272 8 L 280 8 L 288 8 M 280 8 L 288 8 L 296 8 M 288 8 L 296 8 L 304 8 M 296 8 L 304 8 M 308 8 L 304 8 M 308 8 L 308 16 M 4 16 L 4 32 M 4 16 L 4 32 M 88 24 L 96 24 M 88 24 L 96 24 L 104 24 M 96 24 L 104 24 L 112 24 M 104 24 L 112 24 M 176 24 L 184 24 M 176 24 L 184 24 L 192 24 M 184 24 L 192 24 L 200 24 M 192 24 L 200 24 M 308 16 L 308 32 M 308 16 L 308 32 M 4 40 L 4 32 M 4 40 L 8 40 L 16 40 M 8 40 L 16 40 L 24 40 M 16 40 L 24 40 L 32 40 M 24 40 L 32 40 L 40 40 M 32 40 L 40 40 L 48 40 M 40 40 L 48 40 L 56 40 M 48 40 L 56 40 L 64 40 M 56 40 L 64 40 L 72 40 M 64 40 L 72 40 L 80 40 M 72 40 L 80 40 M 84 40 L 84 32 M 84 40 L 80 40 M 116 40 L 116 32 M 116 40 L 120 40 L 128 40 M 120 40 L 128 40 L 136 40 M 128 40 L 136 40 L 144 40 M 136 40 L 144 40 L 152 40 M 144 40 L 152 40 L 160 40 M 152 40 L 160 40 L 168 40 M 160 40 L 168 40 M 172 40 L 172 32 M 172 40 L 168 40 M 204 40 L 204 32 M 204 40 L 208 40 L 216 40 M 208 40 L 216 40 L 224 40 M 216 40 L 224 40 L 232 40 M 224 40 L 232 40 L 240 40 M 232 40 L 240 40 L 248 40 M 240 40 L 248 40 L 256 40 M 248 40 L 256 40 L 264 40 M 256 40 L 264 40 L 272 40 M 264 40 L 272 40 L 280 40 M 272 40 L 280 40 L 288 40 M 280 40 L 288 40 L 296 40 M 288 40 L 296 40 L 304 40 M 296 40 L 304 40 M 308 40 L 308 32 M 308 40 L 304 40" fill="none"/>
<path d="" fill="none" stroke-dasharray="3 3"/>
<text x="9" y="28">
KPN
</text>
<text x="41" y="28">
modem+
</text>
<text x="113" y="28">
+router+
</text>
<text x="201" y="28">
+HOME
</text>
<text x="249" y="28">
NETWORK
</text>
</svg>
</div>
<div><svg class="bob" font-family="arial" font-size="14" height="208" width="168" xmlns="http://www.w3.org/2000/svg">
<defs>
<marker id="triangle" markerHeight="10" markerUnits="strokeWidth" markerWidth="10" orient="auto" refX="15" refY="10" viewBox="0 0 50 20">
<path d="M 0 0 L 30 10 L 0 20 z"/>
</marker>
</defs>
<style>
line, path {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle.solid {
fill:black;
}
circle.open {
fill:transparent;
}
tspan.head{
fill: none;
stroke: none;
}
</style>
<path d=" M 52 8 L 56 8 M 52 8 L 52 16 M 56 8 L 64 8 M 56 8 L 64 8 L 72 8 M 64 8 L 72 8 L 80 8 M 72 8 L 80 8 L 88 8 M 80 8 L 88 8 L 96 8 M 88 8 L 96 8 M 100 8 L 96 8 M 100 8 L 100 16 M 52 16 L 52 32 M 52 16 L 52 32 M 100 16 L 100 32 M 100 16 L 100 32 M 132 16 L 132 32 M 132 16 L 132 32 M 16 40 L 24 40 M 16 40 L 24 40 L 32 40 M 24 40 L 32 40 M 52 40 L 52 32 M 52 40 L 48 40 M 52 40 L 52 48 M 100 40 L 100 32 M 100 40 L 104 40 M 100 40 L 100 48 M 104 40 L 112 40 M 104 40 L 112 40 L 120 40 M 112 40 L 120 40 L 128 40 M 120 40 L 128 40 M 132 36 L 132 32 M 132 44 L 132 48 M 136 40 L 144 40 M 136 40 L 144 40 L 152 40 M 144 40 L 152 40 M 52 48 L 52 64 M 52 48 L 52 64 M 100 48 L 100 64 M 100 48 L 100 64 M 52 72 L 52 64 M 52 72 L 56 72 L 64 72 M 56 72 L 64 72 L 72 72 M 64 72 L 72 72 L 80 72 M 72 72 L 80 72 L 88 72 M 80 72 L 88 72 L 96 72 M 88 72 L 96 72 M 100 72 L 100 64 M 100 72 L 96 72 M 60 104 L 64 104 M 60 104 L 56 112 M 64 104 L 72 104 M 64 104 L 72 104 L 80 104 M 72 104 L 80 104 L 88 104 M 80 104 L 88 104 M 92 104 L 88 104 M 92 104 L 88 112 M 48 128 L 56 112 L 48 128 M 64 120 L 52 120 M 66 124 L 64 128 M 72 120 L 84 120 M 80 128 L 88 112 L 80 128 M 96 120 L 84 120 M 96 120 L 104 120 M 96 120 L 104 120 M 28 136 L 44 136 M 28 136 L 24 144 M 32 136 L 44 136 M 40 144 L 48 128 L 40 144 M 56 144 L 64 128 L 56 144 M 72 144 L 80 128 L 72 144 M 88 136 L 76 136 M 16 160 L 24 144 L 16 160 M 36 152 L 40 144 M 64 160 L 56 144 L 64 160 L 72 144 L 64 160 M 12 168 L 16 168 M 12 168 L 16 160 M 16 168 L 24 168 M 16 168 L 24 168 L 32 168 M 24 168 L 32 168 L 40 168 M 32 168 L 40 168 M 44 168 L 40 168 M 44 168 L 48 176 M 72 176 L 64 160 L 72 176 M 56 192 L 48 176 L 56 192 M 64 192 L 72 176 L 64 192 M 60 200 L 56 192 M 60 200 L 64 192" fill="none"/>
<path d="" fill="none" stroke-dasharray="3 3"/>
<line marker-end="url(#triangle)" x1="132" x2="132" y1="16" y2="4"/>
<line marker-end="url(#triangle)" x1="32" x2="44" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="32" x2="44" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="40" x2="44" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="152" x2="164" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="152" x2="164" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="160" x2="164" y1="40" y2="40"/>
<line marker-end="url(#triangle)" x1="132" x2="132" y1="48" y2="76"/>
<line marker-end="url(#triangle)" x1="132" x2="132" y1="48" y2="76"/>
<line marker-end="url(#triangle)" x1="132" x2="132" y1="64" y2="76"/>
<line marker-end="url(#triangle)" x1="88" x2="92" y1="136" y2="136"/>
<circle class="open" cx="132" cy="40" r="4"/>
<circle class="open" cx="132" cy="40" r="4"/>
<circle class="open" cx="132" cy="40" r="4"/>
<circle class="open" cx="132" cy="40" r="4"/>
<circle class="open" cx="68" cy="120" r="4"/>
<circle class="open" cx="68" cy="120" r="4"/>
<circle class="open" cx="68" cy="120" r="4"/>
<circle class="solid" cx="36" cy="152" r="4"/>
</svg>
</div>
<h3 id="blockdiag" name="blockdiag">blockdiag</h3>
<p>Block diagram</p>
<p>blockdiag {
A -> B -> C -> D;
A -> E -> F -> G;
}</p>
<h2 id="mdx_include" name="mdx_include">mdx_include</h2>
<pre><code class="language-python">{! nginx_mod_authrequest/auth1.py !}</code></pre>
<h2 id="GFM+style+check+lists" name="GFM+style+check+lists">GFM style check lists</h2>
<ul>
<li><input type="checkbox" disabled > foo</li>
<li><input type="checkbox" disabled checked> bar</li>
<li><input type="checkbox" disabled > baz</li>
</ul>
<h2 id="my+mdx+variables" name="my+mdx+variables">my mdx variables</h2>
<p>We use <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/adhoc-rsync/send-nc">snippets</a> as an example.</p>
<p>How we handle missing ${VARS}.</p>
DVTM
urn:uuid:e6d24416-ee6b-7807-ab2b-cf02e9a1282a
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>The other day I found <a href="http://www.brain-dump.org/projects/dvtm/">dvtm</a>. It
looks very nice and appeals to me because I am particularly
fond of text user interfaces.</p>
<p><img src="/images/2021/dvtm-screencast.gif" alt="screencast" /></p>
<p>In the end I chose not to use it because:</p>
<ul>
<li>terminal support was less than 100% useful.</li>
<li>At the end of the day using the mouse is just more convenient.</li>
<li>It is not as ubiquitous as, for example, <a href="https://www.gnu.org/software/screen/">screen</a>, so it
is easier to just use <a href="https://www.gnu.org/software/screen/">screen</a>, which can be set up much
more easily.</li>
<li>most of the time I am already in a window session, so there are
not that many opportunities to use this.</li>
</ul>
<h2 id="dvtm+Cheat+Sheet" name="dvtm+Cheat+Sheet">dvtm Cheat Sheet</h2>
<p>This is a simple cheat sheet for <a href="http://www.brain-dump.org/projects/dvtm/">dvtm</a>.</p>
<p>This uses the default <code>mod</code> key: <code>C-g</code>.</p>
<table>
<thead>
<tr>
<th>Key Seq</th>
<th>Function</th>
</tr>
</thead>
<tbody>
<tr>
<td>C-g M</td>
<td>Toggle mouse mode</td>
</tr>
<tr>
<td>C-g Enter</td>
<td>Zoom current window to master area</td>
</tr>
<tr>
<td>C-g h</td>
<td>Shrink master area</td>
</tr>
<tr>
<td>C-g l</td>
<td>Enlarged master area</td>
</tr>
<tr>
<td>C-g Spc</td>
<td>Toggle layout (vertical stack, bottom stack, grid, full screen)</td>
</tr>
<tr>
<td>C-g f</td>
<td>Vertical stack</td>
</tr>
<tr>
<td>C-g b</td>
<td>Bottom stack</td>
</tr>
<tr>
<td>C-g g</td>
<td>Grid layout</td>
</tr>
<tr>
<td>C-g m</td>
<td>Full screen</td>
</tr>
<tr>
<td>C-g 0</td>
<td>view all windows</td>
</tr>
<tr>
<td>C-g c</td>
<td>Create window</td>
</tr>
<tr>
<td>C-g j</td>
<td>Focus on next window</td>
</tr>
<tr>
<td>C-g k</td>
<td>Focus on previous window</td>
</tr>
<tr>
<td>C-g m</td>
<td>Minimize window</td>
</tr>
<tr>
<td>C-g s</td>
<td>Toggle status bar</td>
</tr>
<tr>
<td>C-g 1-9</td>
<td>Focus on window</td>
</tr>
<tr>
<td>C-g TAB</td>
<td>Toggle focus (last window)</td>
</tr>
<tr>
<td>C-g q</td>
<td>Quit</td>
</tr>
<tr>
<td>C-g C-l</td>
<td>Redraw</td>
</tr>
<tr>
<td>C-g r</td>
<td>Redraw</td>
</tr>
<tr>
<td>C-g PgUp</td>
<td>Scroll back</td>
</tr>
<tr>
<td>C-g PgDn</td>
<td>Scroll Fwd</td>
</tr>
<tr>
<td>C-g C-g</td>
<td>Send C-g</td>
</tr>
</tbody>
</table>
<hr />
<table>
<thead>
<tr>
<th>Key Seq</th>
<th>Additional functions</th>
</tr>
</thead>
<tbody>
<tr>
<td>C-g C</td>
<td>Create window with current directory</td>
</tr>
<tr>
<td>C-g J</td>
<td>Focus on next window "m"</td>
</tr>
<tr>
<td>C-g K</td>
<td>Focus on prev window ?</td>
</tr>
<tr>
<td>C-g i</td>
<td>Increase # windows in master area</td>
</tr>
<tr>
<td>C-g d</td>
<td>decrease # windows in master area</td>
</tr>
<tr>
<td>C-g s</td>
<td>Toggle status bar position (top or bottom)</td>
</tr>
</tbody>
</table>
<p>Other functions that I don't understand or haven't configured:</p>
<ul>
<li>tagging</li>
<li>copymode</li>
<li>status bar</li>
</ul>
Migration to Pelican
urn:uuid:ddabb242-951d-26ad-6621-0accbd3b15ad
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Finally got fed up with <a href="https://pages.github.com/">github pages</a> and its <a href="https://jekyllrb.com/">jekyll</a>
static site generator. Essentially things would break for no
particular reason and there was nearly no way to
tell what went wrong. In addition, it was not easy to test
changes before making them public.</p>
<p>So I switched to <a href="https://blog.getpelican.com/">pelican</a>, essentially because it was
a static site generator that is part of the <a href="https://voidlinux.org/">void</a>
software repository.</p>
<p>I don't really like it that much as its documentation is not
very good. But eventually I got it to work.</p>
<p>I was able to get one of their public templates to work and
tweaked it to match my preferences.</p>
<p>I also was able to add:</p>
<ul>
<li>automatic tag generation
<ul>
<li>this is done before processing input files. i.e. a script
reads existing content and modifies files as needed with
automatic tags.</li>
</ul></li>
<li>sitemap generator
<ul>
<li>this is done as a post-processing stage. A script reads
generated html files and generates the sitemap accordingly.</li>
</ul></li>
</ul>
<p>The most useful feature I found is that I can preview
changes before committing them.</p>
Storing secrets in git
urn:uuid:c618eb91-bbcc-c9dd-ae0a-711aa71d5b7a
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So I gave into the temptation to store "secret" data into
a <a href="https://git-scm.com/">git</a> repository. Of course, to keep things
safer, I chose to use an encryption tool. So I tested:</p>
<ul>
<li><a href="https://github.com/AGWA/git-crypt">git-crypt</a></li>
<li><a href="https://git-secret.io/">git-secret</a></li>
</ul>
<h2 id="git-secret" name="git-secret">git-secret</h2>
<p>So, I tested <a href="https://git-secret.io/">git-secret</a>. It seems to work but was in my
opinion cumbersome.</p>
<p>Furthermore, the version I tested, which
was the one from the <a href="https://voidlinux.org/">void</a> repository, has a bug whereby adding
files for encryption updates the <code>.gitignore</code> file
but forgets to put an EOL at the end of the file.</p>
<p>The main issue I have is that you need to explicitly issue
the command:</p>
<pre><code>git-secret hide
</code></pre>
<p>to hide files. Alternatively you can include this in your
pre-commit hook, but that brings its own issues along.</p>
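<p>For reference, a minimal pre-commit hook along those lines could look like this. This is only a sketch: the <code>-m</code> flag and the <code>*.secret</code> naming are git-secret defaults, but verify them against your version.</p>

```shell
#!/bin/sh
# .git/hooks/pre-commit (sketch): re-encrypt secrets before every commit.
# Only act when git-secret is installed and this repo actually uses it.
if command -v git-secret >/dev/null 2>&1 && [ -d .gitsecret ]; then
  # -m: only re-encrypt files that changed since the last hide
  git secret hide -m || exit 1
  # stage the freshly generated *.secret files so they end up in the commit
  git secret list | while read -r f; do
    git add "$f.secret"
  done
fi
```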
<p>Overall it was not the best experience.</p>
<h2 id="git-crypt" name="git-crypt">git-crypt</h2>
<p>At the end I opted for <a href="https://github.com/AGWA/git-crypt">git-crypt</a>, which is more seamless
and requires less user interaction for it to work.</p>
<h3 id="git-crypt+mini+howto" name="git-crypt+mini+howto">git-crypt mini howto</h3>
<p>I installed <a href="https://github.com/AGWA/git-crypt">git-crypt</a> using my distro package installation
command.</p>
<p>Initialize an existing git repo and export encryption key:</p>
<pre><code>cd repo
git-crypt init
git-crypt export /path/to/key
</code></pre>
<p>The exported key now needs to be shared among all the repo
users. For example, it can be saved into a secret variable in
the CI/CD pipeline system.</p>
<p>Select the files that need to be protected by creating
a <code>.gitattributes</code> file:</p>
<pre><code>secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
secretdir/** filter=git-crypt diff=git-crypt
</code></pre>
<p>Like a <code>.gitignore</code> file, it can match wildcards and should
be checked into the repository. Make sure you don't accidentally
encrypt the <code>.gitattributes</code> file itself (or other git files like
<code>.gitignore</code> or <code>.gitmodules</code>).</p>
<p><strong>NOTE</strong> <em>Make sure your <code>.gitattributes</code> rules are in place before
you add sensitive files, or those files won't be encrypted!</em></p>
<p>After cloning a repository with encrypted files, unlock with
the secret key:</p>
<pre><code>git-crypt unlock /path/to/key
</code></pre>
<p>That's all you need to do - after <a href="https://github.com/AGWA/git-crypt">git-crypt</a> is set up (either
with <code>git-crypt init</code> or <code>git-crypt unlock</code>), you can use git
normally - encryption and decryption happen transparently.</p>
<h3 id="Verifying+that+git-crypt+is+working" name="Verifying+that+git-crypt+is+working">Verifying that git-crypt is working</h3>
<ul>
<li>The simplest way:
<ul>
<li><code>git crypt status</code></li>
</ul></li>
<li>The native way:
<ul>
<li><code>git check-attr -a -- <path></code></li>
</ul></li>
<li>Checking object hashes (these shouldn't match):
<ul>
<li><code>git hash-object <path></code></li>
<li><code>cat <path> | git hash-object --stdin</code></li>
</ul></li>
</ul>
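<p>The hash check works because <code>git hash-object <path></code> applies the clean filter configured in <code>.gitattributes</code> (i.e. encrypts), while <code>--stdin</code> hashes the raw bytes. A self-contained sketch, run in a throwaway repo without git-crypt so the two hashes match:</p>

```shell
# Demonstrates the hash comparison in a throwaway repo. Without a
# git-crypt filter the two hashes are equal; in a git-crypt repo they
# differ for every encrypted path.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo "top secret" > secretfile

filtered=$(git hash-object secretfile)        # clean filters applied
raw=$(git hash-object --stdin < secretfile)   # raw bytes, no filters

if [ "$filtered" = "$raw" ]; then
  echo "secretfile: no clean filter (stored as plain text)"
else
  echo "secretfile: clean filter active (stored encrypted)"
fi
```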
Enable syslog with void
urn:uuid:8a08c4f3-b38a-e598-0376-6b3aa54dead5
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>In <a href="https://voidlinux.org">void</a> Linux, the default installation does no logging. In most
cases that is OK for desktop use.</p>
<p>If you want to enable <a href="https://en.wikipedia.org/wiki/Syslog">syslog</a> service in <a href="https://voidlinux.org">void</a>,
you need to install:</p>
<pre><code>socklog-void</code></pre>
<p>Also to let your user have access to the logs, use:</p>
<pre><code>usermod -aG socklog <your-username></code></pre>
<p>Because I like to have just a single directory for everything and use
<code>grep</code>, I do the following:</p>
<pre><code>rm -rf /var/log/socklog/?*
mkdir /var/log/socklog/everything
ln -s socklog/everything/current /var/log/messages.log</code></pre>
<p>Create the file <code>/var/log/socklog/everything/config</code> with these
contents:</p>
<pre><code>+*
u<syslog-server-ip>:514</code></pre>
<p>Enable daemons...</p>
<pre><code>ln -s /etc/sv/socklog-unix /var/service/
ln -s /etc/sv/nanoklogd /var/service/</code></pre>
<p>Reload <code>svlogd</code> (if it was already running)</p>
<pre><code>killall -1 svlogd</code></pre>
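<p>A quick way to check that logging works end to end (a sketch, assuming the set-up above is in place):</p>

```shell
# Send a uniquely tagged test message and look for it in the combined log.
if command -v logger >/dev/null 2>&1 && [ -r /var/log/messages.log ]; then
  tag="syslog-test-$$"
  logger -t "$tag" "hello from $tag"
  sleep 1   # give svlogd a moment to flush
  grep -q "$tag" /var/log/messages.log && echo "syslog OK" \
    || echo "message not found; check the socklog services"
else
  echo "logger or /var/log/messages.log not available; skipping check"
fi
```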
<h2 id="Reference%3A" name="Reference%3A">Reference:</h2>
<ul>
<li><a href="https://docs.voidlinux.org/config/services/logging.html">voidlinux logging</a></li>
</ul>
Stupid SSL tricks
urn:uuid:cb8dd9df-6d7a-7c36-0721-a34e5dfa5278
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Some hints and tips for doing SSL-related things:</p>
<h2 id="Netcat+for+SSL" name="Netcat+for+SSL">Netcat for SSL</h2>
<p>This command lets you connect to a SSL server (a-la netcat):</p>
<pre><code>cat request.txt | openssl s_client -connect server:443</code></pre>
<h2 id="Creating+self-signed+certificates" name="Creating+self-signed+certificates">Creating self-signed certificates</h2>
<p>This is a single command to generate a self-signed certificate:</p>
<pre><code>openssl req -new \
-newkey rsa:4096 \
-days 365 \
-nodes -x509 \
-subj "/C=NL/ST=ZH/L=Den Haag/O=HomeBase/CN=$fqdn" \
-keyout $ca_root/$fqdn/$fqdn.key \
-out $ca_root/$fqdn/$fqdn.cer</code></pre>
<p>This is unlike other recipes where you first create a <code>csr</code> and <code>key</code>
and then create the <code>certificate</code>.</p>
<h2 id="Checking+and+verifying+certificates" name="Checking+and+verifying+certificates">Checking and verifying certificates</h2>
<ul>
<li>Check certificate
<ul>
<li><code>openssl x509 -in server.crt -text -noout</code></li>
</ul></li>
<li>Check SSL key and verify consistency
<ul>
<li><code>openssl rsa -in server.key -check</code></li>
</ul></li>
<li>Check CSR and print CSR data
<ul>
<li><code>openssl req -text -noout -verify -in server.csr</code></li>
</ul></li>
<li>Verify that certificate and key matches:
<ul>
<li><code>openssl x509 -noout -modulus -in server.crt| openssl md5</code></li>
<li><code>openssl rsa -noout -modulus -in server.key| openssl md5</code></li>
</ul></li>
<li>Check SSL Certificate expiration date
<ul>
<li><code>openssl x509 -dates -noout -in hydssl.cer</code></li>
</ul></li>
</ul>
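<p>For scripting, <code>openssl x509 -checkend</code> turns the expiration check into an exit status. A self-contained sketch that generates a throwaway certificate first (the subject name is just a placeholder):</p>

```shell
# Generate a throwaway self-signed cert, then ask whether it is still
# valid 30 days from now (-checkend takes seconds; exit 0 means valid).
if command -v openssl >/dev/null 2>&1; then
  dir=$(mktemp -d)
  openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
    -subj "/CN=example.test" \
    -keyout "$dir/test.key" -out "$dir/test.crt" 2>/dev/null
  if openssl x509 -checkend $((30 * 86400)) -noout -in "$dir/test.crt"; then
    echo "certificate valid for at least 30 more days"
  else
    echo "certificate expires within 30 days"
  fi
fi
```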
<h2 id="Check+SSL+connection" name="Check+SSL+connection">Check SSL connection</h2>
<ul>
<li>Tests connectivity to an HTTPS service:
<ul>
<li><code>openssl s_client -connect <hostname>:443</code></li>
</ul></li>
<li>Prints all certificates in the certificate chain presented by the
SSL service. Useful when troubleshooting missing intermediate CA
certificate issues.
<ul>
<li><code>openssl s_client -connect <hostname>:<port> -showcerts</code></li>
</ul></li>
<li>Forces TLSv1 and DTLSv1.
<ul>
<li><code>openssl s_client -connect <hostname>:<port> -tls1</code></li>
<li><code>openssl s_client -connect <hostname>:<port> -dtls1</code></li>
</ul></li>
<li>Forces a specific cipher. This option is useful in testing enabled
SSL ciphers. Use the <code>openssl ciphers</code> command to see a list of
available ciphers for OpenSSL.
<ul>
<li><code>openssl s_client -connect <hostname>:<port> -cipher DHE-RSA-AES256-SHA</code></li>
</ul></li>
</ul>
<p>For troubleshooting connection and SSL handshake problems, see the
following:</p>
<ul>
<li>If there is a connection problem reaching the domain, the OpenSSL
<code>s_client -connect</code> command waits until a timeout occurs and prints
an error, such as <code>connect: Operation timed out</code>.</li>
<li>If you use the OpenSSL client to connect to a non-SSL service, the
client connects but the SSL handshake doesn't happen. <code>CONNECTED (00000003)</code> prints as soon as a socket
opens, but the client waits until a timeout occurs and prints an
error message, such as <code>44356:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-47.1/src/ssl/s23_lib.c:182:.</code></li>
</ul>
<p>After disabling a weak cipher, you can verify if it has been disabled
or not with the following command.</p>
<pre><code>openssl s_client -connect google.com:443 -cipher EXP-RC4-MD5</code></pre>
<h2 id="Check+SSL+certificates+on+a+remote+server%3A" name="Check+SSL+certificates+on+a+remote+server%3A">Check SSL certificates on a remote server:</h2>
<ul>
<li>Check who has issued the SSL certificate:
<ul>
<li><code>echo | openssl s_client -servername howtouselinux.com -connect howtouselinux.com:443 2>/dev/null | openssl x509 -noout -issuer</code></li>
</ul></li>
<li>Check whom the SSL certificate is issued to:
<ul>
<li><code>echo | openssl s_client -servername howtouselinux.com -connect howtouselinux.com:443 2>/dev/null | openssl x509 -noout -subject</code></li>
</ul></li>
<li>Check for what dates the SSL certificate is valid:
<ul>
<li><code>echo | openssl s_client -servername howtouselinux.com -connect howtouselinux.com:443 2>/dev/null | openssl x509 -noout -dates</code></li>
</ul></li>
<li>Show the SHA1 fingerprint of the SSL certificate:
<ul>
<li><code>echo | openssl s_client -servername www.howtouselinux.com -connect www.howtouselinux.com:443 2>/dev/null | openssl x509 -noout -fingerprint</code></li>
</ul></li>
<li>Extract all information from the SSL certificate (decoded)
<ul>
<li><code>echo | openssl s_client -servername www.howtouselinux.com -connect www.howtouselinux.com:443 2>/dev/null | openssl x509 -noout -text</code></li>
</ul></li>
<li>Show the SSL certificate itself (encoded):
<ul>
<li><code>echo | openssl s_client -servername howtouselinux.com -connect howtouselinux.com:443 2>/dev/null | openssl x509</code></li>
</ul></li>
</ul>
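<p>Since all of the above only differ in the final <code>x509</code> options, a tiny wrapper is convenient. This is a sketch; the function name is mine and running it needs network access:</p>

```shell
# certinfo HOST [x509 options...] - query a remote server's certificate.
certinfo() {
  _host=$1; shift
  echo | openssl s_client -servername "$_host" -connect "$_host:443" 2>/dev/null \
       | openssl x509 -noout "$@"
}

# usage examples (hypothetical hosts):
#   certinfo howtouselinux.com -dates
#   certinfo howtouselinux.com -issuer -subject
```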
<h2 id="Becoming+your+own+CA" name="Becoming+your+own+CA">Becoming your own <code>CA</code></h2>
<p>The easiest way is to use <a href="https://github.com/FiloSottile/mkcert">mkcert</a>. <a href="https://github.com/FiloSottile/mkcert">mkcert</a> is a
command line tool that automates most of the activities related
a CA.</p>
<p>Otherwise, <a href="https://deliciousbrains.com/ssl-certificate-authority-for-local-https-development/">this article</a> by Brad Touesnard explains
the process fully.</p>
Linux Post Install tasks
urn:uuid:ea9fa619-7796-4a96-baf4-e3e5f6e6fa11
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>These tips are for <a href="https://voidlinux.org" title="Void Linux">void linux</a> as that is the distro
I am using nowadays.</p>
<ul>
<li>mate tricks: change background from cli
<ul>
<li><code>dconf write /org/mate/desktop/background/picture-filename "'PATH-TO-JPEG'"</code></li>
</ul></li>
<li>web page to check if your browser is html5 compliant:
<ul>
<li><a href="https://www.youtube.com/html5">https://www.youtube.com/html5</a></li>
</ul></li>
</ul>
<h2 id="HotKeys" name="HotKeys">HotKeys</h2>
<ul>
<li>Install <code>xbindkeys</code></li>
<li>Add to startup? <code>$HOME/.xprofile</code></li>
<li>Create default config with <code>xbindkeys -d > $HOME/.xbindkeysrc</code></li>
<li>Lookup key combinations:
<ul>
<li><code>xbindkeys --multikey</code> or</li>
<li><code>xbindkeys --key</code></li>
</ul></li>
<li>update bindings <code>xbindkeys --poll-rc</code></li>
<li><code>rofi</code>
<ul>
<li>A good addition to this is <code>rofi</code>, which is a dynamic menu.</li>
<li>Create a xbindkey shortcut to run rofi with:</li>
<li><code>rofi -show-icons -modi drun,window -show drun</code></li>
</ul></li>
</ul>
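<p>For example, a <code>$HOME/.xbindkeysrc</code> entry binding the rofi launcher to a key (the <code>Mod4 + p</code> combination is just an example):</p>
<pre><code class="language-text"># launch rofi with Super+p
"rofi -show-icons -modi drun,window -show drun"
  Mod4 + p</code></pre>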
<h2 id="Additional+software" name="Additional+software">Additional software</h2>
<ul>
<li>Additional software (Tested on void linux):
<ul>
<li>Tip: use <code>sed -e 's/#.*$//'</code> to strip comments!</li>
</ul></li>
</ul>
<pre><code>{! void-installation/swlist-extras.txt !}
</code></pre>
My tale of IPv6 blues
urn:uuid:4acd7759-c135-262f-7560-c89cb4f18ede
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>My ISP provider is <a href="https://www.kpn.com" title="KPN">KPN</a>. They recently enabled
IPv6 in my street. I was using before a IPv6 Tunnel Broker,
starting with <a href="https://www.sixxs.net" title="SixXS">SixXS</a> and after they went out,
with <a href="https://tunnelbroker.net" title="Hurricane Electric">Hurricane Electric</a>. So naturally,
I decided to switch to KPN's native IPv6 service.</p>
<p>They provide a /64 prefix, which is reasonable. It would be better
if they provided a /48, but a /64 is better than what other providers offer.</p>
<p>Getting started with KPN's IPv6 turned out to be very easy. Their default
configuration works right out of the box if you have a single flat
network.</p>
<p>I used to have a router/FW between the KPN modem and my network,
but at some point I decided to go for a flat network design. With
this, (without having to do anything) once KPN enabled IPv6, all my
equipment that was IPv6 capable started using IPv6. It was like
magic.</p>
<p>I run a number of server systems in my home network, using
<a href="https://alpinelinux.org" title="Alpine Linux">Alpine Linux</a> as their operating system.</p>
<p>For some reason, these servers would be able to use IPv6 at first
(either via static configuration or auto-configuration), but stop
working after a few minutes (often after 65 seconds).</p>
<p>Things worked fine for my <a href="https://voidlinux.org/">void-linux</a> systems. These
use <a href="https://en.wikipedia.org/wiki/NetworkManager">NetworkManager</a> so I guess this helps.</p>
<p>Even googling around I was not able to find a solution. Apparently
doing this would re-enable things:</p>
<pre><code>ip -6 a del $ip6_addr dev $IFACE
ip -6 a add $ip6_addr dev $IFACE</code></pre>
<p>So what I did is write a small script that runs
every 45 seconds and does this:</p>
<pre><code>ip -6 address save dev $IFACE scope global > $savefile
ip -6 address flush dev $IFACE scope global
ip -6 address restore < $savefile 2>&1 | grep -v 'RTNETLINK answers: File exists' || :</code></pre>
<p>Again, I have no idea what is going on.</p>
<p>Eventually I changed my set-up to have something like
this:</p>
<div><svg class="bob" font-family="arial" font-size="14" height="48" width="312" xmlns="http://www.w3.org/2000/svg">
<defs>
<marker id="triangle" markerHeight="10" markerUnits="strokeWidth" markerWidth="10" orient="auto" refX="15" refY="10" viewBox="0 0 50 20">
<path d="M 0 0 L 30 10 L 0 20 z"/>
</marker>
</defs>
<style>
line, path {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle {
stroke: black;
stroke-width: 2;
stroke-opacity: 1;
fill-opacity: 1;
stroke-linecap: round;
stroke-linejoin: miter;
}
circle.solid {
fill:black;
}
circle.open {
fill:transparent;
}
tspan.head{
fill: none;
stroke: none;
}
</style>
<path d=" M 4 8 L 8 8 M 4 8 L 4 16 M 8 8 L 16 8 M 8 8 L 16 8 L 24 8 M 16 8 L 24 8 L 32 8 M 24 8 L 32 8 L 40 8 M 32 8 L 40 8 L 48 8 M 40 8 L 48 8 L 56 8 M 48 8 L 56 8 L 64 8 M 56 8 L 64 8 L 72 8 M 64 8 L 72 8 L 80 8 M 72 8 L 80 8 M 84 8 L 80 8 M 84 8 L 84 16 M 116 8 L 120 8 M 116 8 L 116 16 M 120 8 L 128 8 M 120 8 L 128 8 L 136 8 M 128 8 L 136 8 L 144 8 M 136 8 L 144 8 L 152 8 M 144 8 L 152 8 L 160 8 M 152 8 L 160 8 L 168 8 M 160 8 L 168 8 M 172 8 L 168 8 M 172 8 L 172 16 M 204 8 L 208 8 M 204 8 L 204 16 M 208 8 L 216 8 M 208 8 L 216 8 L 224 8 M 216 8 L 224 8 L 232 8 M 224 8 L 232 8 L 240 8 M 232 8 L 240 8 L 248 8 M 240 8 L 248 8 L 256 8 M 248 8 L 256 8 L 264 8 M 256 8 L 264 8 L 272 8 M 264 8 L 272 8 L 280 8 M 272 8 L 280 8 L 288 8 M 280 8 L 288 8 L 296 8 M 288 8 L 296 8 L 304 8 M 296 8 L 304 8 M 308 8 L 304 8 M 308 8 L 308 16 M 4 16 L 4 32 M 4 16 L 4 32 M 84 16 L 84 32 M 84 16 L 84 32 M 96 24 L 84 24 M 96 24 L 116 24 M 96 24 L 116 24 M 104 24 L 116 24 M 116 16 L 116 32 M 116 16 L 116 32 M 172 16 L 172 32 M 172 16 L 172 32 M 184 24 L 172 24 M 184 24 L 204 24 M 184 24 L 204 24 M 192 24 L 204 24 M 204 16 L 204 32 M 204 16 L 204 32 M 308 16 L 308 32 M 308 16 L 308 32 M 4 40 L 4 32 M 4 40 L 8 40 L 16 40 M 8 40 L 16 40 L 24 40 M 16 40 L 24 40 L 32 40 M 24 40 L 32 40 L 40 40 M 32 40 L 40 40 L 48 40 M 40 40 L 48 40 L 56 40 M 48 40 L 56 40 L 64 40 M 56 40 L 64 40 L 72 40 M 64 40 L 72 40 L 80 40 M 72 40 L 80 40 M 84 40 L 84 32 M 84 40 L 80 40 M 116 40 L 116 32 M 116 40 L 120 40 L 128 40 M 120 40 L 128 40 L 136 40 M 128 40 L 136 40 L 144 40 M 136 40 L 144 40 L 152 40 M 144 40 L 152 40 L 160 40 M 152 40 L 160 40 L 168 40 M 160 40 L 168 40 M 172 40 L 172 32 M 172 40 L 168 40 M 204 40 L 204 32 M 204 40 L 208 40 L 216 40 M 208 40 L 216 40 L 224 40 M 216 40 L 224 40 L 232 40 M 224 40 L 232 40 L 240 40 M 232 40 L 240 40 L 248 40 M 240 40 L 248 40 L 256 40 M 248 40 L 256 40 L 264 40 M 256 40 L 264 40 L 272 40 M 264 40 L 272 40 L 280 40 M 272 40 L 280 40 L 288 40 M 280 40 L 288 40 L 
296 40 M 288 40 L 296 40 L 304 40 M 296 40 L 304 40 M 308 40 L 308 32 M 308 40 L 304 40" fill="none"/>
<path d="" fill="none" stroke-dasharray="3 3"/>
<text x="9" y="28">
KPN
</text>
<text x="41" y="28">
modem
</text>
<text x="121" y="28">
router
</text>
<text x="209" y="28">
HOME
</text>
<text x="249" y="28">
NETWORK
</text>
</svg>
</div>
<p>The <code>router</code> in between does Network Address Translation
and Firewalling. The reasons I chose this are:</p>
<ul>
<li>More <em>natural</em> way of handling incoming connections</li>
<li>Makes it easier to switch ISPs down the line.
Alternatively, it would make it possible to load-balance between
two ISPs.</li>
<li>Can use <code>iptables</code> for firewalling. I recognize that this is
only good for a geek like me though.</li>
</ul>
<p>This causes problems with my IPv6 set-up because
now I have two segments.</p>
<p>The KPN modem assumes a flat network (with /64). Since
I can't create routes in the KPN modem, the only
option would have been to NAT. However, the general
consensus is <strong>NOT</strong> to NAT IPv6. See
<a href="https://blogs.infoblox.com/ipv6-coe/ipv6-nat-you-can-get-it-but-you-may-not-need-or-want-it/">this article</a>
for example.</p>
<p>An alternative would have been to split the /64 into /80
segments. Unfortunately, that doesn't work as a lot of the
software out there assumes that the network part of the IPv6
address is at most 64 bits.</p>
<p>Linux has a feature built into the kernel called <code>proxy_ndp</code>;
see <a href="https://vtluug.org/wiki/Proxy_NDP">this example</a>.</p>
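<p>Manually enabling it looks roughly like this (run as root on the router; the interface names and the <code>2001:db8::/32</code> documentation addresses are placeholders):</p>
<pre><code class="language-text">sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2001:db8::5 dev eth0   # answer NDP for this host upstream
ip -6 route add 2001:db8::5/128 dev eth1     # route it to the inner segment</code></pre>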
<p>The problem is that this does not scale well as the proxy address
needs to be statically configured.</p>
<p>There are daemons that claim to proxy NDP for ranges:</p>
<ul>
<li><a href="https://github.com/DanielAdolfsson/ndppd">ndppd</a></li>
<li><a href="https://github.com/setaou/ndp-proxy">ndp-proxy</a></li>
</ul>
<p>These however did not work for me.</p>
<p>So I wrote my own script to manage kernel <code>proxy_ndp</code> entries
myself. Essentially it does the following:</p>
<ul>
<li>listen on <code>ip monitor</code> for IPv6 neighbor messages</li>
<li>add and remove kernel data</li>
</ul>
<p>The whole script can be found <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2021/ipv6-whoes/ndpbr.sh">here</a>.</p>
<p>Another approach is to use <a href="https://thekelleys.org.uk/dnsmasq/doc.html">dnsmasq</a>; it is
described <a href="https://quantum2.xyz/2019/03/08/ndp-proxy-route-ipv6-vpn-addresses/">here</a>.</p>
Replacing emacs...
urn:uuid:e993b178-35eb-a7c4-ecfb-7655e9d45f73
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So after over 30 years of using <a href="https://www.gnu.org/software/emacs/">GNU emacs</a> I have switched
to more modern options.</p>
<p>So I am using:</p>
<ul>
<li><a href="https://www.geany.org/">Geany</a>: For use in a Window environment (both X11 and MSWIN)</li>
<li><a href="https://micro-editor.github.io/">micro</a>: For command line use</li>
<li><a href="https://en.wikibooks.org/wiki/Learning_the_vi_Editor/BusyBox_vi">vi</a>: For small environments</li>
</ul>
<h2 id="Geany" name="Geany">Geany</h2>
<p><a href="https://www.geany.org/">Geany</a> is a nice programmer's text editor. I like that it has syntax
highlighting and has a modern UI. It runs on Windows and Linux.</p>
<p><a href="https://www.geany.org/">Geany</a> has most of the features that I would like without much
need of customization. Most of the customization that I have
done is around the area of getting indentation right and the
use of <code>spaces</code> versus <code>tabs</code> when indenting, especially when
writing <a href="https://www.python.org/">python</a> or <a href="https://yaml.org/">yaml</a> files where the indentation is
important.</p>
<p>A couple of features that I thought were important but that I am not
using much are:</p>
<ul>
<li>macro recording</li>
<li>split screens</li>
</ul>
<p>These features are available in geany as plugins, but I am not
using them.</p>
<h2 id="micro" name="micro">micro</h2>
<p><a href="https://micro-editor.github.io/">micro</a> is a modern, intuitive terminal-based text editor. It
follows modern key bindings and is fairly customizable.</p>
<p>Actually it works well straight out of the box, but I did some
customizations to make it look similar to <a href="https://www.geany.org/">geany</a> and
to work well with <a href="https://www.putty.org/">putty</a>.</p>
<p>This is my <a href="https://github.com/alejandroliu/dotfiles/blob/rcm-style/config/micro/bindings.json">bindings</a>.json file:</p>
<pre><code class="language-json">{! https://github.com/alejandroliu/dotfiles/raw/rcm-style/config/micro/bindings.json !}
</code></pre>
<h3 id="Links+about+micro" name="Links+about+micro">Links about micro</h3>
<ul>
<li><a href="https://github.com/zyedidia/micro/blob/master/runtime/help/defaultkeys.md">help default keys</a></li>
<li><a href="https://github.com/zyedidia/micro/blob/master/runtime/help/options.md">help config options</a></li>
<li><a href="https://github.com/zyedidia/micro/blob/master/runtime/help/keybindings.md">help key bindings</a></li>
</ul>
Linux HDMI hotplug
urn:uuid:ffa460c3-ba56-9c0f-dcb3-9c5200d0ee62
2024-03-05T00:00:00+01:00
Alejandro Liu
<!--
**UPDATE**: This is Xserver focused. For console solution see <a href="/posts/2023/2023-12-31-console-hotplug.html">Console Hotplug</a> article.
-->
<p>The point of this article is to document a workaround that I came
up with to handle an HDMI KVM switch.</p>
<p>What happens is that if my Linux PC is turned on while the KVM switch
is selecting the other PC, it fails to initialize the display, so
when you switch back to the Linux PC, no display is shown.</p>
<p>The trick to making this work is the use of <a href="https://wiki.debian.org/udev">udev</a> and <a href="https://xorg-team.pages.debian.net/xorg/howto/use-xrandr.html">xrandr</a>.</p>
<p>We use <a href="https://wiki.debian.org/udev">udev</a> to detect the monitor being plugged in, and we use
<a href="https://xorg-team.pages.debian.net/xorg/howto/use-xrandr.html">xrandr</a> to tell X windows to update the display.</p>
<h2 id="Figuring+out+udev" name="Figuring+out+udev">Figuring out udev</h2>
<p>First on the agenda is to figure out what kind of event we should
be looking at. For that, we use the command:</p>
<pre><code>udevadm monitor</code></pre>
<p>or</p>
<pre><code>udevadm monitor --property</code></pre>
<p>With that we can determine what kind of <a href="https://wiki.debian.org/udev">udev</a> events to look
for (if any).</p>
<p>Next we need to figure out what keys we need to match. Unfortunately
some guesswork is required, as you need to figure out the <code>/dev</code>
device path, whereas <code>udevadm monitor</code> shows a <code>/devices/</code> path.</p>
<p>However you manage that, you then need to use the following command:</p>
<pre><code>udevadm info --query=all --name=/dev/dri/card0 --attribute-walk</code></pre>
<p>This will show possible attributes in the <a href="https://wiki.debian.org/udev">udev</a> rules key
format.</p>
<p>Once we know the keys to use, we can now create the rules file.</p>
<p>Rules are located in two locations:</p>
<ul>
<li><code>/usr/lib/udev/rules.d/</code> : for system default rules</li>
<li><code>/etc/udev/rules.d/</code> : for local specific rules</li>
</ul>
<p>Essentially, we are waiting for the monitor configuration to change,
and when that happens we run a script. This is accomplished with
the following rules file (<code>99-xwin-hotplug.rules</code>):</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2021/xwin-hotplug/99-xwin-hotplug.rules"></script>
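<p>As a sketch, such a rule matches <code>change</code> events on the <code>drm</code> subsystem and kicks off the hotplug script (the script path here is a placeholder; the actual rules file is embedded above):</p>
<pre><code class="language-text">ACTION=="change", SUBSYSTEM=="drm", RUN+="/usr/local/bin/xwin-hotplug"</code></pre>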
<h2 id="Running+xrandr" name="Running+xrandr">Running xrandr</h2>
<p>The script that is kicked off by <a href="https://wiki.debian.org/udev">udev</a> does the following:</p>
<ol>
<li>Check if <code>Xorg</code> is running.</li>
<li>Assumes <code>DISPLAY</code> is <code>:0.0</code> (only one local display!) and tries to
determine a suitable <code>XAUTHORITY</code> file.</li>
<li>Run <a href="https://xorg-team.pages.debian.net/xorg/howto/use-xrandr.html">xrandr</a> to try to determine what is the <code>connected</code>
display.</li>
<li>Calls <code>xrandr --output "$monitor" --auto</code> to re-configure the display</li>
<li>Run <code>xrefresh</code> for good measure.</li>
</ol>
<p>See script:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2021/xwin-hotplug/xwin-hotplug"></script>
<h2 id="See+Also" name="See+Also">See Also</h2>
<ul>
<li><a href="https://www.thegeekdiary.com/beginners-guide-to-udev-in-linux/">Beginners guide to udev</a></li>
<li><a href="https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux">Tutorial on how to write basic udev rules</a></li>
<li><a href="https://opensource.com/article/18/11/udev">Intro to udev</a></li>
</ul>
Alpine Boot switcher
urn:uuid:9bfed770-fd0c-9186-69d0-6533b60cedff
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I boot from a USB boot drive using UEFI. Because of the UEFI boot,
it is just a matter of copying the files from the <a href="https://alpinelinux.org/">alpine</a>
ISO to a USB thumbdrive FAT32 partition. The partition type may be set to
EFI (but this doesn't seem to be required).</p>
<p>Since I would like to switch between different <a href="https://alpinelinux.org/">alpine</a> versions,
I wrote a script to let me have multiple <a href="https://alpinelinux.org/">alpine</a> versions and
switch between them. The boot partition can be kept <code>ro</code> as the script
will automatically remount <code>rw</code>.</p>
<p>In your boot/EFI partition, you need to have these two scripts:</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2020/alpine-boot-switcher/select.sh">select.sh</a></li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2020/alpine-boot-switcher/fixup.sh">fixup.sh</a></li>
</ul>
<p>The <code>select.sh</code> script is the main script. <code>fixup.sh</code> is called from
<code>select.sh</code> to tweak the boot command line parameters. I use it to add
<code>dom0_mem=2048M</code> arguments to the boot command line so that <code>xen</code>
reserves memory for guests.</p>
<h2 id="Usage%3A" name="Usage%3A">Usage:</h2>
<h3 id="Install+an+ISO+image" name="Install+an+ISO+image">Install an ISO image</h3>
<p>Download an ISO image from the <a href="https://alpinelinux.org/">alpine</a> repository and enter:</p>
<pre><code>sh select.sh --install <iso-file></code></pre>
<p>This will extract the contents of the <code><iso-file></code> and use <code>fixup.sh</code>
to apply any necessary tweaks. The extracted files will be placed
in a directory named according to the alpine version.</p>
<h3 id="Enabling+a+version" name="Enabling+a+version">Enabling a version</h3>
<pre><code>sh select.sh <directory></code></pre>
<p>Makes the <a href="https://alpinelinux.org/">alpine</a> version in <code><directory></code> the current active
version for boot.</p>
PulseAudio hints and tricks
urn:uuid:baff825d-6255-93ca-656e-61bf0781e15b
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="https://www.freedesktop.org/wiki/Software/PulseAudio/">PulseAudio</a> is nowadays the default sound system in many Linux
distributions. It lets you do a number of useful things.</p>
<p><a href="https://www.freedesktop.org/wiki/Software/PulseAudio/">PulseAudio</a> comes with a handy command line utility <code>pacmd</code> to do a
number of things.</p>
<h2 id="pacmd+commands" name="pacmd+commands"><code>pacmd</code> commands</h2>
<ul>
<li>pacmd exit</li>
<li>pacmd help</li>
<li>pacmd list-(modules|sinks|sources|clients|cards|samples)</li>
<li>pacmd list-(sink-inputs|source-outputs)</li>
<li>pacmd stat</li>
<li>pacmd info</li>
<li>pacmd load-module NAME [ARGS ...]</li>
<li>pacmd unload-module NAME|#N</li>
<li>pacmd describe-module NAME</li>
<li>pacmd set-(sink|source)-volume NAME|#N VOLUME</li>
<li>pacmd set-(sink-input|source-output)-volume #N VOLUME</li>
<li>pacmd set-(sink|source)-mute NAME|#N 1|0</li>
<li>pacmd set-(sink-input|source-output)-mute #N 1|0</li>
<li>pacmd update-(sink|source)-proplist NAME|#N KEY=VALUE</li>
<li>pacmd update-(sink-input|source-output)-proplist #N KEY=VALUE</li>
<li>pacmd set-default-(sink|source) NAME|#N</li>
<li>pacmd kill-(client|sink-input|source-output) #N</li>
<li>pacmd play-sample NAME SINK|#N</li>
<li>pacmd remove-sample NAME</li>
<li>pacmd load-sample NAME FILENAME</li>
<li>pacmd load-sample-lazy NAME FILENAME</li>
<li>pacmd load-sample-dir-lazy PATHNAME</li>
<li>pacmd play-file FILENAME SINK|#N</li>
<li>pacmd dump</li>
<li>pacmd move-(sink-input|source-output) #N SINK|SOURCE</li>
<li>pacmd suspend-(sink|source) NAME|#N 1|0</li>
<li>pacmd suspend 1|0</li>
<li>pacmd set-card-profile CARD PROFILE</li>
<li>pacmd set-(sink|source)-port NAME|#N PORT</li>
<li>pacmd set-port-latency-offset CARD-NAME|CARD-#N PORT OFFSET</li>
<li>pacmd set-log-target TARGET</li>
<li>pacmd set-log-level NUMERIC-LEVEL</li>
<li>pacmd set-log-meta 1|0</li>
<li>pacmd set-log-time 1|0</li>
<li>pacmd set-log-backtrace FRAMES</li>
</ul>
<h2 id="Changing+audio+output+from+the+command+line" name="Changing+audio+output+from+the+command+line">Changing audio output from the command line</h2>
<p>For this I use the <code>pacmd</code> utility and manipulate the <code>sink</code> inputs.
For already running streams, the <code>move-sink-input</code> command needs to be used.</p>
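<p>A sketch of that flow (the index numbers are hypothetical; read the real ones off your own <code>pacmd</code> output):</p>

```shell
# Find the running stream and the target sink, then move the stream.
if command -v pacmd >/dev/null 2>&1; then
  pacmd list-sink-inputs | grep 'index:'            # stream numbers (#N)
  pacmd list-sinks | grep -e 'index:' -e 'name:'    # available outputs
  # move stream #5 to sink #1 (example ids)
  pacmd move-sink-input 5 1 || echo "stream 5 or sink 1 not found (example ids)"
else
  echo "pacmd not installed; skipping"
fi
```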
<p>I have a PC with a weird configuration that requires me to switch
profiles instead.</p>
<p>Get the current active profile:</p>
<pre><code>pacmd list-cards | grep 'active profile'</code></pre>
<p>Set the active profile:</p>
<pre><code>pacmd set-card-profile CARD PROFILE</code></pre>
<p>Example commands:</p>
<pre><code>pacmd set-card-profile 0 output:analog-stereo+input:analog-stereo
pacmd set-card-profile 0 output:hdmi-stereo+input:analog-stereo</code></pre>
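<p>The toggle logic can be sketched as a small shell function. This is a sketch under assumptions: the <code>sed</code> pattern assumes <code>pacmd list-cards</code> prints the line as <code>active profile: &lt;name&gt;</code>, and card index <code>0</code> is taken from the example above.</p>

```shell
#!/bin/sh
# Two profiles to flip between (from the example commands above).
A='output:analog-stereo+input:analog-stereo'
B='output:hdmi-stereo+input:analog-stereo'

# Given the "active profile" line from `pacmd list-cards`,
# print whichever of the two profiles is NOT currently active.
next_profile() {
  cur=$(printf '%s\n' "$1" | sed -n 's/.*active profile: <\(.*\)>.*/\1/p')
  if [ "$cur" = "$A" ]; then printf '%s\n' "$B"; else printf '%s\n' "$A"; fi
}

# Real usage would be something like:
#   pacmd set-card-profile 0 "$(next_profile "$(pacmd list-cards | grep 'active profile')")"
```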
<p>All that logic is in a script <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2020/pa-hints/patoggle">here</a>
or download from this <a href="https://github.com/alejandroliu/0ink.net/raw/main/snippets/2020/pa-hints/patoggle">link</a>.</p>
<h2 id="MATE+control+crashing+status+icon" name="MATE+control+crashing+status+icon">MATE control crashing status icon</h2>
<p>For some reason the sound control icon in the notification bar gets
lost for me. To make it re-appear use this command:</p>
<pre><code>mate-volume-control-status-icon</code></pre>
Getting the current proxy pac configuration
urn:uuid:83f36211-c0b0-70ba-c27b-71adc6d1030a
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is done using <a href="https://www.tcl.tk/">tcl</a> for convenience. If you
do not have it installed, you can download the <a href="http://freewrap.sourceforge.net/">freewrap</a>
executable and rename <code>freewrap.exe</code> to <code>wish.exe</code> or <code>freewrapTCLSH.exe</code> to
<code>tclsh.exe</code>.</p>
<pre><code>Registry Key : HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\
REG_SZ AutoConfigURL = https://&lt;your url&gt;/proxy.pac
REG_DWORD ProxyEnable = 0</code></pre>
<p>This is the <a href="https://www.tcl.tk/">tcl</a> script:</p>
<pre><code class="language-tcl">package require registry
package require http
set pacURL [registry get {HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings} AutoConfigURL]
puts $pacURL
set conn [::http::geturl $pacURL]
::http::wait $conn
puts [::http::data $conn]
</code></pre>
Definition of maturity
urn:uuid:b5a5e56b-495c-ca3d-39de-b5b9dc2aad21
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Maturity is:</p>
<p>The ability to stick with a job until it’s finished.</p>
<p>The ability to do a job without being supervised.</p>
<p>The ability to carry money without spending it.</p>
<p>And the ability to bear an injustice without wanting to get even.</p>
Using XScreenSaver Hacks with mate-screensaver
urn:uuid:c06d6c36-fe1c-d081-8914-bbfbfe5ca5a8
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Here we explain how to use the <a href="https://www.jwz.org/xscreensaver/">XScreenSaver</a> collection of
<strong>excellent</strong> screensaver hacks with the <a href="https://mate-desktop.org/">MATE</a> screensaver
applet.</p>
<ul>
<li>Install <code>xscreensaver</code> and <code>mate-screensaver</code></li>
<li>On my linux distribution this creates the following directories:
<ul>
<li><code>/usr/libexec/xscreensaver</code>: contains the screensaver hacks executables</li>
<li><code>/usr/libexec/mate-screensaver</code> : contains the <code>mate-screensaver</code> executables</li>
<li><code>/usr/share/applications/screensavers</code> : contains the <code>desktop</code> files</li>
</ul></li>
<li>Create a small script that will call the screensaver hack with the right
arguments. Make sure this script is in the <code>/usr/libexec/mate-screensaver</code>
directory, as the <code>mate-screensaver</code> preferences will not accept any
executables that are not in the right places.</li>
<li>Create a desktop file to call the screensaver hack. Verify that
the <code>Exec</code> property contains the application with the right arguments
and the <code>TryExec</code> only contains a path to the script that you created
in the previous step. The <code>mate-screensaver</code> preferences applet
will test if the file specified in <code>TryExec</code> is indeed executable.</li>
<li>Restart <code>mate-screensaver</code>. I usually logout and log back in.</li>
</ul>
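<p>The wrapper-script and desktop-file steps above can be sketched as follows. The hack name <code>popsquares</code> and the <code>DESTDIR</code> staging directory are stand-ins; adjust the paths for your distribution.</p>

```shell
#!/bin/sh
# Stage the files under DESTDIR first so this can be dry-run;
# installing for real into / needs root.
DESTDIR=${DESTDIR:-/tmp/ss-demo}
mkdir -p "$DESTDIR/usr/libexec/mate-screensaver" \
         "$DESTDIR/usr/share/applications/screensavers"

# Wrapper script, placed where mate-screensaver accepts executables.
cat > "$DESTDIR/usr/libexec/mate-screensaver/popsquares.sh" <<'EOF'
#!/bin/sh
exec /usr/libexec/xscreensaver/popsquares -root "$@"
EOF
chmod 755 "$DESTDIR/usr/libexec/mate-screensaver/popsquares.sh"

# Desktop file; TryExec must point at the wrapper script.
cat > "$DESTDIR/usr/share/applications/screensavers/popsquares.desktop" <<'EOF'
[Desktop Entry]
Name=PopSquares
Exec=/usr/libexec/mate-screensaver/popsquares.sh
TryExec=/usr/libexec/mate-screensaver/popsquares.sh
Type=Application
EOF
```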
<p>For my computers I use this script:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2020/mate-screensaver-hacks/installer.sh"></script>
<p>This simplifies the full process. Just run the script (you may need to
<code>sudo</code>) with the following options:</p>
<ul>
<li><code>$0 hacks [-e|-d]</code>
<ul>
<li>shows the list of hacks and their enabled or disabled status.</li>
<li>the <code>-e</code> option will only show enabled hacks.</li>
<li>the <code>-d</code> option will only show disabled hacks.</li>
</ul></li>
<li>`$0 enable [--all|hacks]
<ul>
<li>enable the specified hacks.</li>
<li>Use <code>--all</code> to enable all available hacks (excluding blacklisted hacks)</li>
</ul></li>
<li>`$0 disable [--all|hacks]
<ul>
<li>disable the specified hacks.</li>
<li>Use <code>--all</code> to disable all available hacks</li>
</ul></li>
</ul>
<hr />
<p>If you like <a href="https://www.jwz.org/xscreensaver/">XScreenSaver</a> and would like to see the same software
on Windows, you should read <a href="https://www.jwz.org/xscreensaver/xscreensaver-windows.html">this article</a> from the
<a href="https://www.jwz.org/xscreensaver/">XScreenSaver</a> author.</p>
nginx's auth_request_module howto
urn:uuid:04ee8311-2ecb-ecd0-b7bc-cb5ab29ecc9d
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This article tries to supplement the <a href="http://nginx.org/en/">nginx</a> documentations
regarding the <a href="http://nginx.org/en/docs/http/ngx_http_auth_request_module.html">auth_request</a> module
and how to <a href="https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/">configure</a> it. In my opinion, that documentation
is a bit incomplete.</p>
<h2 id="What+is+the+nginx%27s+%5Bauth_request%5D%5Bngx_http_auth_request_module%5D+module" name="What+is+the+nginx%27s+%5Bauth_request%5D%5Bngx_http_auth_request_module%5D+module">What is the nginx's <a href="http://nginx.org/en/docs/http/ngx_http_auth_request_module.html">auth_request</a> module</h2>
<p>The documentation for this module says that it implements client
authorization based on the result of a subrequest.</p>
<p>This means that when you make an HTTP request to a protected URL,
<a href="http://nginx.org/en/">nginx</a> performs an internal subrequest to a defined
authorization URL. If the result of the subrequest is HTTP 2xx,
<a href="http://nginx.org/en/">nginx</a> proxies the original HTTP request to the backend server.
If the result of the subrequest is HTTP 401 or 403, access to the
backend server is denied.</p>
<p>By configuring <a href="http://nginx.org/en/">nginx</a>, you can redirect those 401s or 403s to
a login page where the user is authenticated and then redirected to
the original destination.</p>
<p>The entire authorization subrequest process is then repeated, but
because the user is now authenticated the subrequest returns HTTP 200
and the original HTTP request is proxied to the backend server.</p>
<h2 id="Configuring+%5Bnginx%5D%5Bnginx%5D" name="Configuring+%5Bnginx%5D%5Bnginx%5D">Configuring <a href="http://nginx.org/en/">nginx</a></h2>
<p>In your <a href="http://nginx.org/en/">nginx</a> configuration...</p>
<p>This block configures the web server area that will be protected:</p>
<pre><code class="language-nginx"> location /hello {
error_page 401 = @error401; # Specific login page to use
auth_request /auth; # The sub-request to use
auth_request_set $username $upstream_http_x_username; # Make the sub request data available
auth_request_set $sid $upstream_http_x_session; # send what is needed
proxy_pass http://sample.com:8080/hello; # actual location of protected data
proxy_set_header X-Forwarded-Host $host; # Custom headers with authentication related data
proxy_set_header X-Remote-User $username;
proxy_set_header X-Remote-SID $sid;
}
location @error401 {
return 302 /login/?url=http://$http_host$request_uri;
}
</code></pre>
<ul>
<li><code>error_page 401</code> defines the custom login page to use (if any).
In theory, for a REST API, it would be possible to authenticate
using a provided <code>token</code> header, which makes the login page
unnecessary.</li>
<li><code>auth_request /auth</code> defines that this location needs authentication
and defines the sub-request location to use.</li>
<li><code>auth_request_set</code> can be used to get data from the subrequest
headers and make it available later (for instance, to a
backend content server via custom headers).</li>
<li><code>proxy_pass</code> defines the actual backend content server.</li>
<li><code>proxy_set_header</code> defines custom header used to pass information
to the content backend server.</li>
</ul>
<p>This block configures the authentication sub-request server:</p>
<pre><code class="language-nginx"> location = /auth {
proxy_pass http://auth-server.sample.com:8080/auth; # authentication server
proxy_pass_request_body off; # no data is being transferred...
proxy_set_header Content-Length '0';
proxy_set_header Host $host; # Custom headers with authentication related data
proxy_set_header X-Origin-URI $request_uri;
proxy_set_header X-Forwarded-Host $host;
}</code></pre>
<ul>
<li><code>proxy_pass</code> where the sub request should be handled.</li>
<li><code>proxy_pass_request_body off</code> and <code>proxy_set_header Content-Length 0</code> are
used to suppress the request body and send only the headers to the
authentication server.</li>
<li><code>proxy_set_header</code> additional details sent with the subrequest,
for example the <code>X-Origin-URI</code>.</li>
</ul>
<p>This implements the login page URL:</p>
<pre><code class="language-nginx"> # If the user is not logged in, redirect them to login URL
location @error401 {
return 302 https://$host/login/?url=https://$http_host$request_uri;
}</code></pre>
<p>In this example, the login page is on the same reverse proxy, but
it doesn't have to be that way.</p>
<p>The actual login page:</p>
<pre><code class="language-nginx"> location /login/ {
proxy_pass http://auth-server.sample.com:8080/login/; # Where the login happens
proxy_set_header X-My-Real-IP $remote_addr; # Additional parameters to send to login page
proxy_set_header X-My-Real-Port $remote_port;
proxy_set_header X-My-Server-Port $server_port;
}</code></pre>
<ul>
<li><code>proxy_pass</code> points to where the login script runs</li>
<li><code>proxy_set_header</code> can be used to pass additional fields that may
be needed by the login script. To implement for example,
<code>$remote_addr</code> based access rules.</li>
</ul>
<p>So in this particular example, we are referring to a server with
<strong>TWO</strong> locations:</p>
<ol>
<li><code>http://auth-server.sample.com:8080/auth</code> - The sub-request URI which is not visible
outside but handles the sub-request.</li>
<li><code>http://auth-server.sample.com:8080/login/</code> - The login URI which handles the
login conversation.</li>
</ol>
<h2 id="The+Authentication+Server" name="The+Authentication+Server">The Authentication Server</h2>
<p>This is where the <a href="http://nginx.org/en/">nginx</a> documentation falls a bit short: there
is no actual authentication server example to refer to.</p>
<p>In my example, we have a simple authentication workflow. When an
unauthenticated user hits the server, the sub-request is called
and checks (and fails) for a session cookie.</p>
<p>The user is then redirected to the login page, where the actual
login takes place. If successful, a session cookie is set and
the user is redirected to the original URL.</p>
<p>This is implemented using the following script:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/nginx_mod_authrequest/auth1.py"></script>
<p>This makes use of the <a href="https://bottlepy.org/">bottle</a> micro framework.</p>
<p>It implements four routes:</p>
<ol>
<li><code>GET /hello</code>
This is just a demo URL used for testing. Only shows the request headers.</li>
<li><code>GET /login/</code>
This is the login page entry point.</li>
<li><code>POST /login/</code>
This is the handler for the login page.</li>
<li><code>GET /auth</code>
This is the sub-request handler.</li>
</ol>
<p>For the demo, we are not really doing any login handling. You only
need to make the username the same as the password to log in. Anything
else is a login failure.</p>
<p>When the user successfully logs in, we set a cookie. Because the
login URL and the protected resource (the <code>/hello</code> URL) are in the
same cookie scope, we can use the cookie set by the login page
as the verification token in the sub-request.</p>
<p>Note that the login page can be as simple or as complex as
needed. For example, it is possible to implement a <a href="https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language">SAML</a>,
<a href="https://openid.net/connect/">OpenID Connect</a>, or any Single-Sign-On workflow
available.</p>
<p>Alternatively, two-factor authentication could be implemented here.
The possibilities are endless.</p>
<p>An interesting use of the <a href="http://nginx.org/en/docs/http/ngx_http_auth_request_module.html">auth_request</a>
module would be to delegate <a href="https://en.wikipedia.org/wiki/Basic_access_authentication">Basic Authentication</a> to a different
server, or even to implement authentication schemes not supported by
<a href="http://nginx.org/en/">nginx</a>, such as a simple bearer-token header or
<a href="https://en.wikipedia.org/wiki/Digest_access_authentication">Digest authentication</a>.</p>
<h3 id="basic+authentication" name="basic+authentication">basic authentication</h3>
<p>This may seem silly since <a href="http://nginx.org/en/">nginx</a> supports <a href="https://en.wikipedia.org/wiki/Basic_access_authentication">basic authentication</a>
out of the box. The use case for this is when you have a cluster of nginx
front ends, and you want all of them to authenticate against a central
identity server. Furthermore, since the URI can be passed, a more sophisticated
access control can be implemented. Finally, additional values can be passed
through headers, such as group names, tokens, etc.</p>
<h4 id="nginx+configuration" name="nginx+configuration">nginx configuration</h4>
<p>Protected Resource:</p>
<pre><code class="language-nginx"> location /hello {
auth_request /auth; # The sub-request to use
auth_request_set $username $upstream_http_x_username; # Make the sub request data available
proxy_pass http://sample.com:8080/hello; # actual location of protected data
proxy_set_header X-Forwarded-Host $host; # Custom headers with authentication related data
proxy_set_header X-Remote-User $username;
}</code></pre>
<p><strong>NOTE:</strong> unlike the previous example, we do not need to provide a @error401 page.</p>
<p>Sub-request configuration:</p>
<pre><code class="language-nginx"> location = /auth {
proxy_pass http://auth-server.sample.com:8080/auth; # authentication server
proxy_pass_request_body off; # no data is being transferred...
proxy_set_header Content-Length '0';
proxy_set_header Host $host; # Custom headers with authentication related data
proxy_set_header X-Origin-URI $request_uri;
proxy_set_header X-Forwarded-Host $host;
}</code></pre>
<h4 id="authentication+server" name="authentication+server">authentication server</h4>
<p>The python implementation (again, using <a href="https://bottlepy.org/">bottle</a>):</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/nginx_mod_authrequest/auth2.py"></script>
<p>Like in the previous example, we are not doing any real user/password verification; we only
check whether the username and password match.</p>
<p>Unlike the previous example, all the authentication is handled by a single route (<code>/auth</code>).
It returns a <code>WWW-Authenticate</code> header to prompt the user for a password, and if it sees
an <code>Authorization</code> header, it validates it.</p>
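<p>The demo's validation rule (accept only when the username equals the password) can be sketched in shell for a <code>Basic</code> header. <code>check_basic</code> is a hypothetical helper, not part of the embedded <code>auth2.py</code> script.</p>

```shell
#!/bin/sh
# Validate an "Authorization: Basic ..." header value the way the demo does.
check_basic() { # usage: check_basic "Basic <base64 of user:pass>"
  creds=$(printf '%s' "${1#Basic }" | base64 -d) || return 1
  user=${creds%%:*}
  pass=${creds#*:}
  # Demo rule: any non-empty username whose password matches it.
  [ -n "$user" ] && [ "$user" = "$pass" ]
}
```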
<h3 id="digest+authentication" name="digest+authentication">digest authentication</h3>
<p>This implements <a href="https://en.wikipedia.org/wiki/Digest_access_authentication">digest</a> authentication for <a href="http://nginx.org/en/">nginx</a> using the
<a href="http://nginx.org/en/docs/http/ngx_http_auth_request_module.html">auth request module</a>. The nginx configuration is the
same as in the <a href="https://en.wikipedia.org/wiki/Basic_access_authentication">Basic</a> authentication.</p>
<p>The implementation in python (using the <a href="https://bottlepy.org/">bottle</a> framework):</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/nginx_mod_authrequest/auth3.py"></script>
Python Virtual Environments
urn:uuid:bee0a0b1-4182-1d9f-3165-9646f36952cb
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is the least you need to know to get to use a Python virtual
environment.</p>
<h3 id="What+is+a+Virtual+Environment" name="What+is+a+Virtual+Environment">What is a Virtual Environment</h3>
<p>At its core, the main purpose of Python virtual environments is to
create an isolated environment for Python projects. This means that
each project can have its own dependencies, regardless of what
dependencies every other project has.</p>
<p>The great thing about this is that there are no limits to the number
of environments you can have since they're just directories containing
a few scripts.</p>
<h3 id="Pre-requisites" name="Pre-requisites">Pre-requisites</h3>
<p>While <code>venv</code> is part of <code>python3</code>, for <code>python2</code> you need to install
<code>virtualenv</code>.</p>
<ul>
<li>void-linux: <code>python-virtualenv</code></li>
</ul>
<h3 id="Create" name="Create">Create</h3>
<p>To create a new virtual environment:</p>
<h4 id="Python2" name="Python2">Python2</h4>
<pre><code>mkdir folder
virtualenv folder</code></pre>
<p>If you want to inherit system global packages in your virtual
environment use this instead:</p>
<pre><code>mkdir folder
virtualenv --system-site-packages folder</code></pre>
<h4 id="Python3" name="Python3">Python3</h4>
<pre><code>mkdir folder
python3 -m venv folder</code></pre>
<p>If you want to inherit system global packages in your virtual
environment use this instead:</p>
<pre><code>mkdir folder
python3 -m venv --system-site-packages folder</code></pre>
<p>I prefer to use the <code>--system-site-packages</code> option; that way
I can install <code>binary</code> modules using the host's package manager.
This avoids needing a compiler on the host system.</p>
<h3 id="Activate" name="Activate">Activate</h3>
<p>To activate a virtual environment:</p>
<pre><code>. &lt;folder&gt;/bin/activate</code></pre>
<p>Note that we are using <code>.</code> to source the script into
the current interpreter.</p>
<h3 id="De-Activate" name="De-Activate">De-Activate</h3>
<p>To de-activate:</p>
<pre><code>deactivate</code></pre>
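<p>The create/activate/de-activate cycle above can be exercised end to end in a throwaway directory. A sketch assuming <code>python3</code> is on the <code>PATH</code>:</p>

```shell
#!/bin/sh
dir=$(mktemp -d)
python3 -m venv "$dir/env"      # Create
. "$dir/env/bin/activate"       # Activate (note the leading dot)
# Inside the environment, the interpreter prefix points at the venv.
prefix=$(python -c 'import sys; print(sys.prefix)')
echo "running from: $prefix"
deactivate                      # De-Activate
rm -rf "$dir"
```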
<h3 id="Run+from+a+script" name="Run+from+a+script">Run from a script</h3>
<p>To use the virtual environment from a script (e.g. when running
as a background daemon) you need to add these lines to the
beginning of your python script:</p>
<pre><code># Python 2 (virtualenv):
activate_this = '/path/to/virtualenv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

# Python 3 (activate_this.py is shipped by virtualenv, not by venv):
exec(open(activate_this).read(), dict(__file__=activate_this))</code></pre>
<p>Or from a shell script:</p>
<pre><code>#!/bin/sh
. name_Env/bin/activate
# virtualenv is now active.
exec python script.py "$@"</code></pre>
<h3 id="References" name="References">References</h3>
<p>For more information see:</p>
<ul>
<li><a href="https://docs.python.org/3/library/venv.html">Python3 venv</a></li>
<li><a href="https://docs.python-guide.org/dev/virtualenvs/">pipenv & virtual environments</a></li>
<li><a href="https://realpython.com/python-virtual-environments-a-primer/">virtual env primer</a></li>
</ul>
Secure erase of disc drives
urn:uuid:c388ce5d-4e05-f17c-9fd0-937b716fb3cc
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This article is about erasing disc drives securely. Especially for SSD
drives, writing zeros or random data to discs is not good enough and is
counterproductive.</p>
<p>One way to do secure erase (for disposal) is to begin with an encrypted
disc. However, after the fact the following options are possible:</p>
<h2 id="ATA+Secure+Erase" name="ATA+Secure+Erase">ATA Secure Erase</h2>
<p>You should use the drive's security erase feature.</p>
<p>Make sure the drive Security is not frozen. If it is, it may help to suspend and resume the computer.</p>
<pre><code>$ sudo hdparm -I /dev/sdX | grep frozen
not frozen </code></pre>
<p>The (filtered) command output means that this drive is "not frozen" and you can continue.</p>
<p>Set a User Password (this password is cleared too, the exact choice does not matter).</p>
<pre><code>sudo hdparm --user-master u --security-set-pass Eins /dev/sdX</code></pre>
<p>Issue the ATA Secure Erase command</p>
<pre><code>sudo hdparm --user-master u --security-erase Eins /dev/sdX</code></pre>
<p>Notes:</p>
<ul>
<li>/dev/sdX is the SSD as a block device that you want to erase.</li>
<li>Eins is the password chosen in this example.</li>
</ul>
<p>See the ATA Secure Erase article in the <a href="https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase">Linux kernel wiki</a> for
complete instructions including troubleshooting.</p>
<p>If for some reason you need to remove the password use:</p>
<pre><code>sudo hdparm --security-disable Eins /dev/sdX</code></pre>
<h2 id="blkdiscard" name="blkdiscard">blkdiscard</h2>
<p><code>util-linux 2.23</code> offers <a href="http://man7.org/linux/man-pages/man8/blkdiscard.8.html">blkdiscard</a> which discards data without
secure-wiping them. This has been tested to work over SATA and mmcblk
but not USB.</p>
<p>An excerpt from the manual page of <code>blkdiscard(8)</code>:</p>
<hr />
<h3 id="NAME" name="NAME">NAME</h3>
<p>blkdiscard - discard sectors on a device</p>
<h3 id="SYNOPSIS" name="SYNOPSIS">SYNOPSIS</h3>
<pre><code>blkdiscard [-o offset] [-l length] [-s] [-v] device</code></pre>
<h3 id="DESCRIPTION" name="DESCRIPTION">DESCRIPTION</h3>
<p><code>blkdiscard</code> is used to discard device sectors. This is useful for
solid-state drives (SSDs) and thinly-provisioned storage. Unlike
<code>fstrim(8)</code> this command is used directly on the block device.</p>
<p>By default, <code>blkdiscard</code> will discard all blocks on the device. Options
may be used to modify this behavior based on range or size, as explained
below.</p>
<p>The device argument is the pathname of the block device.</p>
<p><strong>WARNING: All data in the discarded region on the device will be lost!</strong></p>
<hr />
<h2 id="Use+TRIM" name="Use+TRIM">Use TRIM</h2>
<p>To enable TRIM:</p>
<pre><code>sudo vi /etc/fstab</code></pre>
<p>In the <code>ext4</code> line, change <code>errors=remount-ro</code> into <code>discard,errors=remount-ro</code>.
<strong>(Add <code>discard</code>)</strong></p>
<p>Save and reboot, TRIM should now be enabled.</p>
<p>Check if TRIM is enabled:</p>
<pre><code>sudo dd if=/dev/urandom of=tempfile count=100 bs=512k oflag=direct
sudo hdparm --fibmap tempfile</code></pre>
<p>Use the first begin_LBA address.</p>
<pre><code>hdparm --read-sector [begin_LBA] /dev/sda</code></pre>
<p>Now it should return numbers and characters. Remove the file and sync.</p>
<pre><code>rm tempfile
sync</code></pre>
<p>Now, run the following command again. If it returns zeros TRIM is enabled.</p>
<pre><code>hdparm --read-sector [begin_LBA] /dev/sda</code></pre>
<p>Another option is to use the <a href="http://man7.org/linux/man-pages/man8/fstrim.8.html">fstrim</a> command.</p>
<h2 id="Old+fashioned+writes" name="Old+fashioned+writes">Old fashioned writes</h2>
<p>This is what I used to do for magnetic discs. Note, that this is
discouraged for SSD devices:</p>
<p>First I create some random data to use:</p>
<pre><code>dd if=/dev/urandom of=/var/tmp/random bs=1M count=128</code></pre>
<p>Then we write random data to disc:</p>
<pre><code>(while : ; do dd if=/var/tmp/random bs=4k ; done ) | pv | dd of=/dev/sdX bs=4k</code></pre>
<p>The <code>pv</code> part of the pipe is <strong>optional</strong>.</p>
<p>Afterwards:</p>
<pre><code>dd if=/dev/zero of=/dev/sdX bs=4k</code></pre>
Co-existing GLIBC binaries with Void-Linux MUSL edition
urn:uuid:1ce9acd2-8501-8503-4e5e-89ed1ac4f0d6
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I am running <a href="https://voidlinux.org/">void-linux</a> at home with <code>musl</code> as the standard
C library.</p>
<p>While most things work well, there are a number of programs that
do not, and these must use their <code>glibc</code> counterparts.</p>
<p>To enable this I followed this guide here: <a href="https://blog.w1r3.net/2017/09/23/live-switching-void-linux-from-glibc-to-musl.html">Live switching Void Linux from glibc to musl</a>.</p>
<p>To set-up:</p>
<pre><code>mkdir -p /glibc
sudo env XBPS_ARCH=x86_64 xbps-install --repository=http://alpha.de.repo.voidlinux.org/current -r /glibc -S base-voidstrap</code></pre>
<ul>
<li><code>sudo</code> : yes, we need root</li>
<li><code>env</code> : Needed because we are using <code>sudo</code>.</li>
<li><code>XBPS_ARCH=x86_64</code> : architecture to use. Since we are using musl,
we point to the glibc version. It should be possible to create
a 32 bit root here.</li>
<li><code>--repository=http://alpha.de.repo.voidlinux.org/current</code> : the repository
to use. Feel free to replace to something closer.</li>
<li><code>-r /glibc</code> : directory tree where the glibc executables will live</li>
<li><code>base-voidstrap</code> : unlike <code>base-system</code>, this meta-package is normally
used for containers.</li>
</ul>
<p>To keep this tree up to date:</p>
<pre><code>sudo env XBPS_ARCH=x86_64 xbps-install --repository=http://alpha.de.repo.voidlinux.org/current -r /glibc -Su</code></pre>
<p>To add software to the tree:</p>
<pre><code>sudo env XBPS_ARCH=x86_64 xbps-install --repository=http://alpha.de.repo.voidlinux.org/current -r /glibc -S pkg</code></pre>
<p>Once this is set-up you need a small program to kick off the <code>glibc</code> executables. I copied this one:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-glibc-in-musl/glibc.c"></script>
<p>To compile and install:</p>
<pre><code>gcc -s -o glibc glibc.c
sudo cp glibc /usr/bin
sudo chown root:root /usr/bin/glibc
sudo chmod +sx /usr/bin/glibc</code></pre>
<p>Then you can just run:</p>
<pre><code>glibc cmd args</code></pre>
<p>I have found that the following software doesn't work well with <code>musl</code>:</p>
<ul>
<li>Calibre</li>
<li>building buildroot (Because of compilation of <code>fakeroot</code>)</li>
</ul>
Calculate system availability
urn:uuid:1ff19c62-43b1-21fd-34b5-7e1e36ab87c2
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>To calculate the availability of redundant systems you can
use this formula:</p>
<pre><code>total_avail = 1-(1 - single_avail) ^ (number_of_nodes)</code></pre>
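<p>The formula can be sketched as a small shell function (using <code>awk</code> for the floating-point math):</p>

```shell
#!/bin/sh
# total_avail = 1 - (1 - single_avail)^nodes, in percent.
availability() { # usage: availability NODES SINGLE_PERCENT
  awk -v n="$1" -v a="$2" 'BEGIN { printf "%.4f", (1 - (1 - a/100)^n) * 100 }'
}

# Two nodes at 99% each:
availability 2 99   # prints 99.9900
```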
<form action="">
<table>
<tr><td>Nodes:</td><td> <input type="number" id="nodes" name="nodes" min="1" maxlength="4" value="2" onchange="myCalculation();"></td></tr>
<tr><td>Single component availability (%):</td><td><input type="number" id="savail" name="savail" min="0.10" step="any" maxlength="10" value="99.00" onchange="myCalculation();"></td></tr>
<tr><td>Total Availability (%): </td><td><input name="total" id="total" type="number" maxlength="20" min="0" placeholder="00.00" readonly> </td></tr>
</table>
</form>
<script>
function myCalculation() {
var nodes = parseInt(document.getElementById('nodes').value,10);
var sava = parseFloat(document.getElementById('savail').value);
var result = (1-(1-sava/100.0)**(nodes))*100
document.getElementById('total').value = result
}
</script>
Ad-Hoc rsync daemons
urn:uuid:a0575684-65b1-2b27-8ca5-03406821a705
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>The other day I needed to copy a bunch of files between two servers
in my home network. Because of the volume, I wanted to copy the files
without going through <code>ssh</code>'s encryption overhead, so I
figured I could use <code>netcat</code> for the data transport.</p>
<p>To do that I wrote these short scripts.</p>
<h2 id="Remote+scripts" name="Remote+scripts">Remote scripts</h2>
<p>Copy these scripts on the remote server. Make sure they are executable.</p>
<ul>
<li>Remote CLI</li>
</ul>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/adhoc-rsync/recv-nc"></script>
<ul>
<li>Remote Helper script</li>
</ul>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/adhoc-rsync/recv"></script>
<h2 id="Local+scripts" name="Local+scripts">Local scripts</h2>
<p>Copy these scripts on the local server. Make sure they are executable</p>
<ul>
<li>Local CLI</li>
</ul>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/adhoc-rsync/send"></script>
<ul>
<li>Local Helper Script</li>
</ul>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/adhoc-rsync/send-nc"></script>
<h2 id="Usage" name="Usage">Usage</h2>
<p>Usage is fairly straightforward. On the remote server, enter the
command:</p>
<pre><code>./recv-nc</code></pre>
<p>This will make the remote host listen for new client connections. On the local
server, enter the command:</p>
<pre><code>./send -avzr --delete --stats SRC/ remote:DST</code></pre>
<p>Actually, just use whatever <code>rsync</code> options you need to use. The <code>send</code>
script will include the <code>--rsh</code> option to make sure the helper
script gets executed.</p>
<h2 id="Issues" name="Issues">Issues</h2>
<p>Unfortunately, the local helper script does not detect that the transfer
has completed. The remote helper script will finish correctly and
exit when the transfer is done.</p>
<p>You can simply press <code>Ctrl+C</code> to quit, or, if you want to see the
summary stats, kill the running <code>cat</code> command instead.</p>
<p>Example:</p>
<pre><code>$ bg
[2]+ ./send -avzr --stats SRC localhost:DST &
$ pidof cat
13143
$ kill 13143
$ Terminated
Number of files: 6 (reg: 5, dir: 1)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 0
Total file size: 5,869 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 202
Total bytes received: 17
sent 202 bytes received 17 bytes 438.00 bytes/sec
total size is 5,869 speedup is 26.80
rsync error: syntax or usage error (code 1) at main.c(1189) [sender=3.1.3]
$</code></pre>
Resizing Virtual Disks with virsh
urn:uuid:9bf4bcdd-c7ac-af7e-96b8-1ceb59afdbaa
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I am currently using <code>libvirt</code> for managing my VMs. For virtual discs
I am using <code>LVM2</code> volumes. On a regular basis I need to resize
these virtual discs, but not often enough that I can do it from
memory. This is a short procedure:</p>
<pre><code>ls -l /dev/vgX/lvX # note down the major/minor numbers for later
lvextend -L +50G /dev/vgX/lvX # adding 50GB to this volume
cat /proc/partitions # look up the size (in blocks) using major/minor numbers
virsh blockresize --path /dev/vgX/lvX --size SIZE_FROM_PROC_PARTITIONS vmname</code></pre>
<p>Then on the running system you can do:</p>
<pre><code>cat /proc/partitions # Make sure that size is right
xfs_growfs /mount/point # On-line partition re-size</code></pre>
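<p>The size lookup can be scripted. A minimal sketch (the <code>253:5</code>
major/minor numbers are hypothetical; it assumes <code>virsh</code> treats a bare
<code>--size</code> value as KiB, which matches the 1024-byte blocks that
<code>/proc/partitions</code> reports):</p>
<pre><code># Extract the size column for a given major/minor pair from a
# /proc/partitions-style listing (sizes are in 1024-byte blocks, i.e. KiB)
lookup_kib() {  # usage: lookup_kib MAJOR MINOR PARTITIONS_FILE
  awk -v maj="$1" -v min="$2" '$1 == maj { if ($2 == min) print $3 }' "$3"
}
# e.g. if /dev/vgX/lvX is 253:5:
#   virsh blockresize --path /dev/vgX/lvX --size "$(lookup_kib 253 5 /proc/partitions)" vmname</code></pre>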
Z-Wave Associations with Vera UI
urn:uuid:56e81e8f-984d-57ae-4637-1ef5c4de2cbf
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I couldn't find any to-the-point documentation on how to do this,
so I am writing it down here.</p>
<p>The way I understand Z-Wave associations work is that once devices
are in the same Z-Wave network, a device can directly send
a command to another device without intervention of the Hub
or controller.</p>
<p>For this to really work, the master device sending commands must
support this functionality. This varies from device to device,
so you must look up the documentation of the master device and
find the supported association groups. In a nutshell, you look up
what each association group is for, and then you add to that group
the slave devices that will receive the Z-Wave commands from the
master.</p>
<p>So for example, a <code>TKB-Home TZ55D</code> Wall mounted dimmer/switch
with two buttons has the
following association groups:</p>
<ul>
<li>Group 1: Control device using the left button.</li>
<li>Group 2: Control device using the right button.</li>
<li>Group 3: Control device using the right button after a double tap.</li>
<li>Group 4: Control device so that it follows the switch state.</li>
</ul>
<p>So this dimmer/switch has two buttons, the left button is used for
local control. The right button can be used to control devices
associated to groups 2 and 3.</p>
<p>In the Vera UI7 this is done as follows:</p>
<ol>
<li>Go to the <code>Devices</code> list.</li>
<li>Locate the <em>master</em> device in the list and click its <code>settings</code>
button.</li>
<li>Click <code>Device Options</code>.</li>
<li>Under <em>Associations</em>, enter the <code>Group ID</code> from the documentation
as explained above, and click <code>Add group</code>.
You may need to click <code>Back</code> and enter <code>Device Options</code> again
for the group to be visible.</li>
<li>Once the desired <code>Group ID</code> is available, click on the <code>Set</code>
button.</li>
<li>Click the checkbox next to each Z-Wave device to add it to the group.
The entry field next to the device is used to enter sub-device
IDs. This is used for multichannel devices. For example, an
RGBW controller may have multiple channels to control the
different color LEDs. Check the <em>slave</em> device documentation
for the valid sub-channel IDs to use.</li>
</ol>
Encrypting FileSystem in Void Linux
urn:uuid:390b790d-9248-c91b-7603-568364aab380
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>The point of this recipe is to create an encrypted file system
so that when the disc is disposed of, it does not need to be
securely erased. This is particularly important for SSD devices,
since because of block remapping (for wear levelling) data can't
be reliably overwritten.</p>
<p>The idea is that the boot/root filesystem containing the encryption
keys is stored on a different device than the encrypted file system.</p>
<hr />
<p>Generate a passphrase and save it in a safe place for later.</p>
<h2 id="Create+block+devices" name="Create+block+devices">Create block devices</h2>
<pre><code>xbps-install -S lvm2 cryptsetup
cryptsetup luksFormat /dev/xda2
cryptsetup luksOpen /dev/xda2 crypt-pool
</code></pre>
<p>Alternate commands</p>
<pre><code>dd if=/dev/urandom of=/crypto_keyfile.bin bs=1024 count=4
chmod 000 /crypto_keyfile.bin
cryptsetup luksFormat /dev/xda2 /crypto_keyfile.bin
cryptsetup luksOpen --key-file=/crypto_keyfile.bin /dev/xda2 crypt-pool
</code></pre>
<p>Add <code>rd.luks.crypttab=1 rd.luks=1</code> to the kernel command line.</p>
<h2 id="Create+a+decryption+key" name="Create+a+decryption+key">Create a decryption key</h2>
<p>Create the key file in the unencrypted <code>/</code> partition</p>
<pre><code>dd if=/dev/urandom of=/crypto_keyfile.bin bs=1024 count=4
chmod 000 /crypto_keyfile.bin
cryptsetup -v luksAddKey /dev/xda2 /crypto_keyfile.bin</code></pre>
<p>Look up the UUID</p>
<pre><code>blkid /dev/xda2</code></pre>
<p>Create entry in <code>/etc/crypttab</code>:</p>
<pre><code>crypt-pool UUID=xxxxxxxxxxxxxxxx /crypto_keyfile.bin luks</code></pre>
<p>Create <code>/etc/dracut.conf.d/10-crypt.conf</code></p>
<pre><code>install_items+="/etc/crypttab /crypto_keyfile.bin"</code></pre>
<p>Update initrd:</p>
<pre><code>xbps-reconfigure -f linux4.19</code></pre>
<p>Update boot menu entries:</p>
<pre><code>bash /boot/mkmenu.sh</code></pre>
<p>At this point it would be good to save:</p>
<ul>
<li><code>/etc/crypttab</code></li>
<li><code>/crypto_keyfile.bin</code></li>
<li>Optionally, passphrase</li>
</ul>
<hr />
<p>Reboot and make sure that the block device gets created on start-up.</p>
<p>Create your file-system and add it to <code>/etc/fstab</code>.</p>
<pre><code>vgcreate pool /dev/mapper/crypt-pool
lvcreate --name home0 -L 20G pool
</code></pre>
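<p>As a sketch of that final step (the <code>home0</code> label and <code>/home</code>
mount point are only examples, not part of the original recipe):</p>
<pre><code># Build the /etc/fstab entry for the new volume; "nofail" keeps a
# missing volume from halting the boot sequence.
fstab_line() {  # usage: fstab_line LABEL MOUNTPOINT FSTYPE
  printf 'LABEL=%s %s %s rw,relatime,nofail 0 2\n' "$1" "$2" "$3"
}
fstab_line home0 /home xfs
# prints: LABEL=home0 /home xfs rw,relatime,nofail 0 2
# After "mkfs.xfs -L home0 /dev/pool/home0", append this line to
# /etc/fstab and run "mount /home".</code></pre>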
<h2 id="Re-using+an+existing+fs+in+a+new+OS+install" name="Re-using+an+existing+fs+in+a+new+OS+install">Re-using an existing fs in a new OS install</h2>
<p>Usually this procedure would be used on a fresh install when the
root filesystem was destroyed. It requires a backup of
<code>/crypto_keyfile.bin</code> and optionally <code>/etc/crypttab</code>.</p>
<ol>
<li>Add <code>rd.luks.crypttab=1 rd.luks=1</code> to the kernel command line.</li>
<li>Restore the <code>/crypto_keyfile.bin</code>. Make sure it is in <code>/</code> and
permissions are <code>chmod 000 /crypto_keyfile.bin</code>.</li>
<li>If available, restore the <code>/etc/crypttab</code> otherwise look up the
block device UUID and re-create the <code>/etc/crypttab</code> entry:
<ul>
<li>Look-up the UUID:</li>
<li><code>blkid /dev/xda2</code></li>
<li>Add the entry in <code>/etc/crypttab</code></li>
<li><code>crypt-pool UUID=xxxxxxxxxxxxxxxx /crypto_keyfile.bin luks</code></li>
</ul></li>
<li>Create <code>/etc/dracut.conf.d/10-crypt.conf</code>
<ul>
<li><code>install_items+="/etc/crypttab /crypto_keyfile.bin"</code></li>
</ul></li>
<li>Update initrd:
<ul>
<li><code>xbps-reconfigure -f linux4.19</code></li>
</ul></li>
<li>Update boot menu entries:</li>
</ol>
<ul>
<li><code>bash /boot/mkmenu.sh</code></li>
</ul>
Installing Void Linux
urn:uuid:a4665b65-2ddb-1a72-8951-d6994eef1d8c
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I made the switch to <a href="https://voidlinux.org" title="Void Linux">void linux</a>. Except for compatibility
issues around <code>glibc</code>, it works quite well. Most compatibility issues
I have worked around with a combination of <code>Flatpak</code>s, <code>chroot</code>s and
<code>namespaces</code>.</p>
<p>The highlights of <a href="https://voidlinux.org" title="Void Linux">void linux</a>:</p>
<ul>
<li>musl build - which is very lightweight</li>
<li>Does not depend on <code>systemd</code></li>
<li>a reasonable selection of software packages</li>
</ul>
<p>I have tweaked the installation on my computers to use UEFI, and thus
I am using <a href="http://www.rodsbooks.com/refind/" title="rEFInd bootloader">rEFInd</a> instead of grub, because it makes
bare metal backups and restores a simple file copy.</p>
<p>My installation process roughly follows the <a href="https://wiki.voidlinux.org/Installation_on_UEFI,_via_chroot" title="Install void linux on UEFI via chroot">UEFI chroot install</a>.</p>
<p>This process is implemented in a script and can be found here:</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/install.sh">install.sh</a></li>
</ul>
<p>Script usage:</p>
<pre><code class="language-markdown"> Usage: installer.sh _/dev/sdx_ _hostname_ [options]
- _sdx_: Block device to install to or
- --image=filepath[:size] to create a virtual disc image
- --imgset=filebase[:size] to create a virtual filesystem image set
- --dir=dirpath to create a directory
- _hostname_: Hostname to use
Options:
- swap=kbs : swap size, defaults computed from /proc/meminfo, uses numfmt to parse values
- glibc : Do a glibc install
- noxwin : do not install X11 related packages
- nodesktop : do not install desktop environment
- desktop=mate : Install MATE desktop environment
- passwd=password : root password (prompt if not specified)
- enc-passwd=encrypted : encrypted root password.
- ovl=tar.gz : tarball containing additional files
- post=script : run a post install script
- pkgs=file : text file containing additional software to install
- bios : create a BIOS boot system (needs syslinux)
- cache=path : use the file path for download cache
- xen : do some xen specific tweaks
- xdm-candy : Enable xdm candy
- noxdm : disable graphical login</code></pre>
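<p>The <code>swap=</code> values are parsed with <code>numfmt</code>, so suffixed sizes
work; for example (an illustration of <code>numfmt</code>, not the installer itself):</p>
<pre><code class="language-bash">numfmt --from=iec 4G   # prints 4294967296 (4 GiB in bytes)</code></pre>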
<h4 id="Command+line+examples" name="Command+line+examples">Command line examples</h4>
<ul>
<li>sudo sh install.sh --dir=$HOME/vx9 vx9 swap=4G glibc passwd=1234567890 cache=$HOME/void-cache xen</li>
<li>sudo sh install.sh --dir=$HOME/vx1 vx1 swap=4G glibc passwd=1234567890 cache=$HOME/void-cache xen</li>
<li>sudo sh install.sh --dir=$HOME/vx11 vx11 swap=4G passwd=1234567890 cache=$HOME/void-cache xen</li>
</ul>
<h3 id="Initial+set-up" name="Initial+set-up">Initial set-up</h3>
<p>Boot using the void live CD and partition the target disk:</p>
<pre><code class="language-bash">cfdisk -z /dev/xda</code></pre>
<p>Make sure you use <code>gpt</code> label type (for UEFI boot). I am creating
the following partitions:</p>
<ol>
<li>500MB <code>EFI System</code></li>
<li><em>RAM size × 1.5</em> <code>Linux swap</code>, mainly used for Hibernate.</li>
<li><em>Rest of drive</em> <code>Linux filesystem</code>, Root file system</li>
</ol>
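<p>The <em>RAM size × 1.5</em> swap sizing can be computed from
<code>/proc/meminfo</code>, for example:</p>
<pre><code class="language-bash"># MemTotal in /proc/meminfo is reported in kB
mem_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
swap_kb=$(( mem_kb * 3 / 2 ))
echo "swap partition size: ${swap_kb} kB"</code></pre>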
<p>This is on a USB thumb drive. The data I keep on an internal disk.</p>
<p>Now we create the filesystems:</p>
<pre><code class="language-bash">mkfs.vfat -F 32 -n EFI /dev/xda1
mkswap -L swp0 /dev/xda2
mkfs.xfs -f -L voidlinux /dev/xda3</code></pre>
<p>We're now ready to mount the volumes, making any necessary mount point directories along the way (the sequence is important, yes):</p>
<pre><code class="language-bash">mount /dev/xda3 /mnt
mkdir /mnt/boot
mount /dev/xda1 /mnt/boot</code></pre>
<h3 id="Installing+Void" name="Installing+Void">Installing Void</h3>
<p>So we do a targeted install:</p>
<p>For musl-libc</p>
<pre><code class="language-bash">env XBPS_ARCH=x86_64-musl xbps-install -S -R http://alpha.de.repo.voidlinux.org/current/musl -r /mnt base-system grub-x86_64-efi</code></pre>
<p>For glibc</p>
<pre><code class="language-bash">env XBPS_ARCH=x86_64 xbps-install -S -R http://alpha.de.repo.voidlinux.org/current -r /mnt base-system grub-x86_64-efi</code></pre>
<p>But actually, for the package list I have been using these lists:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/swlist.txt"></script>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/swlist-xwin.txt?footer=minimal"></script>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/swlist-mate.txt?footer=minimal"></script>
<p>This installs a <a href="https://mate-desktop.org/" title="MATE Desktop environment">MATE</a> desktop environment.</p>
<h4 id="Software+selection+notes" name="Software+selection+notes">Software selection notes</h4>
<ul>
<li>For time synchronisation (ntp) we are choosing <code>chrony</code> as it is
reputed to be more secure than <code>ntpd</code> and more compliant than
<code>openntpd</code>.</li>
<li>We are using the default configuration, which should be OK. It uses
<code>pool.ntp.org</code> for the time server, which is a suitable
default.</li>
<li>For <code>cron</code> we are using <code>dcron</code>. It is full featured (i.e.
compatible with <code>cron</code>), it can handle power-off situations,
and it is the most light-weight option available.
See: <a href="https://voidlinux.org/faq/#cron">VoidLinux FAQ: Cron</a></li>
<li>Includes <code>autofs</code> and <code>nfs-utils</code> for network filesystems and
automount support.</li>
</ul>
<h3 id="nonfree+software+and+other+repositories" name="nonfree+software+and+other+repositories">nonfree software and other repositories</h3>
<p>Additional repositories are available to support
non-free software and, in the case of glibc, multilib (32-bit)
binaries.</p>
<p>To enable under the musl version:</p>
<pre><code class="language-bash">env XBPS_ARCH="$arch" xbps-install -y -S -R "$voidurl" -r /mnt void-repo-nonfree</code></pre>
<p>For glibc:</p>
<pre><code class="language-bash">env XBPS_ARCH="$arch" xbps-install -y -S -R "$voidurl" -r /mnt void-repo-nonfree void-repo-multilib void-repo-multilib-nonfree</code></pre>
<p>Then you can install non-free software, like:</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/swlist-nonfree.txt"></script>
<h3 id="Enter+the+void+chroot" name="Enter+the+void+chroot">Enter the void chroot</h3>
<p>Upon completion of the install, we set up our chroot jail, and chroot into our mounted filesystem:</p>
<pre><code class="language-bash">mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -o bind /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts
cp -L /etc/resolv.conf /mnt/etc/
chroot /mnt bash -il</code></pre>
<p>In order to verify our install, we can have a look at the directory structure:</p>
<pre><code class="language-bash"> ls -la</code></pre>
<p>The output should look something akin to the following:</p>
<pre><code class="language-markdown">total 12
drwxr-xr-x 16 root root 4096 Jan 17 15:27 .
drwxr-xr-x 3 root root 4096 Jan 17 15:16 ..
lrwxrwxrwx 1 root root 7 Jan 17 15:26 bin -> usr/bin
drwxr-xr-x 4 root root 127 Jan 17 15:37 boot
drwxr-xr-x 2 root root 17 Jan 17 15:26 dev
drwxr-xr-x 26 root root 4096 Jan 17 15:27 etc
drwxr-xr-x 2 root root 6 Jan 17 15:26 home
lrwxrwxrwx 1 root root 7 Jan 17 15:26 lib -> usr/lib
lrwxrwxrwx 1 root root 9 Jan 17 15:26 lib32 -> usr/lib32
lrwxrwxrwx 1 root root 7 Jan 17 15:26 lib64 -> usr/lib
drwxr-xr-x 2 root root 6 Jan 17 15:26 media
drwxr-xr-x 2 root root 6 Jan 17 15:26 mnt
drwxr-xr-x 2 root root 6 Jan 17 15:26 opt
drwxr-xr-x 2 root root 6 Jan 17 15:26 proc
drwxr-x--- 2 root root 26 Jan 17 15:39 root
drwxr-xr-x 3 root root 17 Jan 17 15:26 run
lrwxrwxrwx 1 root root 8 Jan 17 15:26 sbin -> usr/sbin
drwxr-xr-x 2 root root 6 Jan 17 15:26 sys
drwxrwxrwt 2 root root 6 Jan 17 15:15 tmp
drwxr-xr-x 11 root root 123 Jan 17 15:26 usr
drwxr-xr-x 11 root root 150 Jan 17 15:26 var</code></pre>
<p>While chrooted, we create the password for the root user, and set root access permissions:</p>
<pre><code class="language-bash"> passwd root
chown root:root /
chmod 755 /</code></pre>
<p>Since I am a <code>bash</code> convert, I would do this:</p>
<pre><code class="language-bash"> xbps-alternatives --set bash</code></pre>
<p>Create the <code>hostname</code> for the new install:</p>
<pre><code class="language-bash">echo &lt;HOSTNAME&gt; > /etc/hostname</code></pre>
<p>Edit our <code>/etc/rc.conf</code> file, like so:</p>
<pre><code class="language-bash">HOSTNAME="&lt;HOSTNAME&gt;"
# Set RTC to UTC or localtime.
HARDWARECLOCK="UTC"
# Set timezone, availables timezones at /usr/share/zoneinfo.
TIMEZONE="Europe/Amsterdam"
# Keymap to load, see loadkeys(8).
KEYMAP="us-acentos"
# Console font to load, see setfont(8).
#FONT="lat9w-16"
# Console map to load, see setfont(8).
#FONT_MAP=
# Font unimap to load, see setfont(8).
#FONT_UNIMAP=
# Kernel modules to load, delimited by blanks.
#MODULES=""</code></pre>
<p>Also, modify the <code>/etc/fstab</code>:</p>
<pre><code class="language-markdown">#
# See fstab(5).
# &lt;file system&gt; &lt;dir&gt; &lt;type&gt; &lt;options&gt; &lt;dump&gt; &lt;pass&gt;
tmpfs /tmp tmpfs defaults,nosuid,nodev 0 0
LABEL=EFI /boot vfat rw,fmask=0133,dmask=0022,noatime,discard 0 2
LABEL=voidlinux / xfs rw,relatime,discard 0 1
LABEL=swp0 swap swap defaults 0 0</code></pre>
<p>For a removable drive I include the line:</p>
<pre><code class="language-markdown">LABEL=volume /media/blahblah xfs rw,relatime,nofail 0 0</code></pre>
<p>The important setting here is <strong>nofail</strong>. When the drive is
available it gets mounted; if not, <strong>nofail</strong> prevents
this from stopping the boot sequence.</p>
<p>If using <code>glibc</code> you can modify <code>/etc/default/libc-locales</code> and
uncomment:</p>
<p><code>en_US.UTF-8 UTF-8</code></p>
<p>Or whatever locale you want to use. And run:</p>
<pre><code class="language-bash"> xbps-reconfigure -f glibc-locales</code></pre>
<h3 id="Set-up+UEFI+boot" name="Set-up+UEFI+boot">Set-up UEFI boot</h3>
<p>Download the <a href="http://www.rodsbooks.com/refind/" title="rEFInd bootloader">rEFInd</a> zip binary from:</p>
<ul>
<li><a href="http://www.rodsbooks.com/refind/getting.html" title="rEFInd download page">rEFInd download</a></li>
</ul>
<p>Set-up the boot partition:</p>
<pre><code class="language-bash">mkdir /boot/EFI
mkdir /boot/EFI/BOOT</code></pre>
<p>Copy from the <code>zip file</code> the file <code>refind-bin-{version}/refind/refind_x64.efi</code> to
<code>/boot/EFI/BOOT/BOOTX64.EFI</code>.</p>
<p>The version I am using right now can be found here: <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/BOOTX64.EFI">v0.11.4 BOOTX64.EFI</a></p>
<p>Create kernel options files <code>/boot/cmdline</code>:</p>
<pre><code class="language-bash">root=LABEL=voidlinux ro quiet
</code></pre>
<p>For my hardware I had to add the option:</p>
<ul>
<li><code>intel_iommu=igfx_off</code>
<ul>
<li>To work around some strange bug.</li>
</ul></li>
<li><code>i915.enable_ips=0</code>
<ul>
<li>fixes a power saving mode problem on 4.1-rc6+</li>
</ul></li>
</ul>
<p>Create the following script as <code>/boot/mkmenu.sh</code></p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/mkmenu.sh"></script>
<p>Add the following scripts to:</p>
<ul>
<li><code>/etc/kernel.d/post-install/99-refind</code></li>
<li><code>/etc/kernel.d/post-remove/99-refind</code></li>
</ul>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/hook.sh"></script>
<p>Make sure they are executable. This is supposed to re-create
menu entries whenever the kernel gets upgraded.</p>
<p>We need to have a look at <code>/lib/modules</code> to get our Linux kernel version</p>
<pre><code class="language-bash">ls -la /lib/modules</code></pre>
<p>Which should return something akin to:</p>
<pre><code class="language-markdown">drwxr-xr-x 3 root root 21 Jan 31 15:22 .
drwxr-xr-x 23 root root 8192 Jan 31 15:22 ..
drwxr-xr-x 3 root root 4096 Jan 31 15:22 5.2.13_1</code></pre>
<p>And run this to create the boot files:</p>
<pre><code class="language-bash">xbps-reconfigure -f linux5.2</code></pre>
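<p>The mapping from the <code>/lib/modules</code> entry to the package name can be
sketched as follows (assuming the <code>version_revision</code> directory layout
shown above):</p>
<pre><code class="language-bash"># e.g. "5.2.13_1" -> "linux5.2": keep the first two version fields
ver_to_pkg() { echo "linux$(echo "$1" | cut -d. -f1-2)"; }
ver_to_pkg 5.2.13_1   # prints linux5.2
# so: xbps-reconfigure -f "$(ver_to_pkg "$(ls /lib/modules | sort -V | tail -1)")"</code></pre>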
<p>If you need to manually prepare boot files:</p>
<pre><code class="language-bash"># update dracut
dracut --force --kver 4.19.4_1
# update refind menu
bash /boot/mkmenu.sh</code></pre>
<p>We are now ready to boot into <a href="https://voidlinux.org" title="Void Linux">Void</a>.</p>
<pre><code class="language-bash">exit
umount -R /mnt
reboot</code></pre>
<h3 id="Post+install" name="Post+install">Post install</h3>
<p>After the first boot, we need to activate services:</p>
<p>Command line set-up:</p>
<pre><code class="language-bash">ln -s /etc/sv/dhcpcd /var/service
ln -s /etc/sv/sshd /var/service
ln -s /etc/sv/{acpid,chronyd,cgmanager,crond,uuidd,statd,rpcbind,autofs} /var/service</code></pre>
<p>Full workstation set-up:</p>
<pre><code class="language-bash">ln -s /etc/sv/dbus /var/service
ln -s /etc/sv/NetworkManager /var/service
ln -s /etc/sv/sshd /var/service
ln -s /etc/sv/{acpid,chronyd,cgmanager,crond,uuidd,statd,rpcbind,autofs} /var/service
ln -s /etc/sv/{consolekit,xdm} /var/service</code></pre>
<p>Creating new users:</p>
<pre><code class="language-bash">useradd -m -s /bin/bash -U -G wheel,users,audio,video,cdrom,input newuser
passwd newuser</code></pre>
<p>Note: The <code>wheel</code> user group allows the user to escalate to root.</p>
<p>Configure sudo:</p>
<pre><code class="language-bash">visudo</code></pre>
<p>Uncomment:</p>
<pre><code class="language-bash"># %wheel ALL=(ALL) ALL</code></pre>
<h3 id="Configure+keyboard" name="Configure+keyboard">Configure keyboard</h3>
<p>Create configuration file: <code>/etc/X11/xorg.conf.d/30-keyboard.conf</code></p>
<pre><code class="language-xorg.conf">Section "InputClass"
Identifier "keyboard-all"
Option "XkbLayout" "us"
# Option "XkbModel" "pc105"
# Option "XkbVariant" "altgr-intl"
Option "XkbVariant" "intl"
# MatchIsKeyboard "on"
EndSection</code></pre>
<p>This makes the <code>intl</code> for the <code>XkbVariant</code> the system-wide default.</p>
<p>Since, as a programmer, I prefer the <code>altgr-intl</code> variant, I
run this in my desktop environment startup to override the
default:</p>
<pre><code class="language-bash">setxkbmap -rules evdev -model evdev -layout us -variant altgr-intl
</code></pre>
<h3 id="Using+xdm" name="Using+xdm">Using xdm</h3>
<p>I have switched to <a href="https://en.wikipedia.org/wiki/XDM_(display_manager)">xdm</a> as my display manager. This is
configured in <code>/etc/X11/xdm/xdm-config</code>.</p>
<p>Specifically, I update the Xsession setting to be the following:</p>
<pre><code>! DisplayManager*session: /usr/lib64/X11/xdm/Xsession
DisplayManager*session: /etc/X11/Xsession
</code></pre>
<p>And have a custom <a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/Xsession">Xsession</a> script in
<code>/etc/X11/Xsession</code>.</p>
<p>Particularly important is the fact that the default Xsession
script is not able to start <code>mate</code> or <code>xfce4</code> sessions
unless you add the command:</p>
<pre><code class="language-bash">xhost +local:
</code></pre>
<p>Apparently there is somewhat of an issue in the way <code>xauth</code>
is handled.</p>
<p><strong>NOTE:</strong> <em>Doing <code>xhost +local:</code> is hardly a best practice
when it comes to security.</em></p>
<h4 id="Spicing+up+XDM" name="Spicing+up+XDM">Spicing up XDM</h4>
<p>Although <a href="https://en.wikipedia.org/wiki/XDM_(display_manager)">xdm</a> is fairly old-school, there are
still some opportunities to add some eye-candy to
it. For that, we change the <code>setup</code> and <code>startup</code> scripts
<code>Xsetup_0</code> and <code>GiveConsole</code> into custom scripts:</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/xdm/Xsetup_0">Xsetup_0</a></li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/xdm/GiveConsole">GiveConsole</a></li>
</ul>
<p>Unfortunately, it only works for applications that draw
directly to the root window, as it is not possible to control
overlapping windows. For example, running <code>cmatrix</code> in
an <code>xterm</code> window covers the login widget.</p>
<p>On the other hand, the <a href="https://www.jwz.org/xscreensaver/">xscreensaver</a> collection of screen
hacks seem to accept the <code>-root</code> parameter, which can be used
to kick off the hack, drawing on the root window.</p>
<h3 id="NOT+using+a+display+manager" name="NOT+using+a+display+manager"><em>NOT</em> using a display manager</h3>
<p>If you do not want to run a display manager, you can simply
start your session from the Linux console and use <code>startx</code> and
<code>xinitrc</code> combination.</p>
<p>Alternatively, you can add a file in <code>/etc/profile.d</code> to start X
at login if on tty1.</p>
<ul>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/Xsession">session</a></li>
<li><a href="https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/noxdm/zzdm.sh">zzdm.sh</a></li>
</ul>
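<p>The tty check that such a <code>/etc/profile.d</code> drop-in performs can be
sketched like this (an illustration only; the real <code>zzdm.sh</code> is linked
above):</p>
<pre><code class="language-bash"># Start X only on a fresh login at the first virtual console
want_x() {  # usage: want_x TTY DISPLAY
  [ -z "$2" ] || return 1        # already inside an X session
  [ "$1" = /dev/tty1 ]           # only on tty1
}
# At login: if want_x "$(tty)" "$DISPLAY"; then exec startx; fi</code></pre>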
<p>I am using the <code>session</code> script, which is a modified version of
the earlier <code>Xsession</code> script that I am using for <code>xdm</code> to
launch a desktop session.</p>
<p>The script <code>zzdm.sh</code> is used to <code>startx</code> on login.</p>
<h3 id="Tweaks+and+Bug-fixes" name="Tweaks+and+Bug-fixes">Tweaks and Bug-fixes</h3>
<h4 id="%2Fetc%2Fmachine-id+or+%2Fvar%2Flib%2Fdbus%2Fmachine-id" name="%2Fetc%2Fmachine-id+or+%2Fvar%2Flib%2Fdbus%2Fmachine-id">/etc/machine-id or /var/lib/dbus/machine-id</h4>
<p>Because we don't use <code>systemd</code>, we need to create <code>/etc/machine-id</code>
and <code>/var/lib/dbus/machine-id</code> manually.
This is only needed for desktop systems.</p>
<p>See [this article][machineid] for more
info.</p>
<pre><code class="language-bash"> dbus-uuidgen | tee /etc/machine-id /var/lib/dbus/machine-id</code></pre>
<h4 id="power+button+handling" name="power+button+handling">power button handling</h4>
<p>This patch prevents <code>/etc/acpi/handler.sh</code> from handling the power
button, letting the Desktop Environment handle the event instead.</p>
<p>It does this by checking if an X session is running. In the
<code>/etc/rc.local</code> script, we create a file called
<code>/run/xsession.pid</code> which is made writeable by all.
The system is configured so that <code>xdm</code> or <code>/etc/profile.d/zzdm.sh</code>
(when logging in as a normal user on <code>tty1</code>) will start an X session
and will use the scripts <code>/etc/X11/xinit/session</code> or
<code>/etc/X11/xdm/Xsession</code> to start the session.
From these scripts, the current X session information is saved
to <code>/run/xsession.pid</code>.</p>
<p>When <code>/etc/acpi/handler.sh</code> starts, it will check whether
<code>/run/xsession.pid</code> contains a running session. It will
also check if a Desktop Environment power manager
(in this case <code>mate-power-manager</code>) is running. If it is, then
it will exit.</p>
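<p>The check can be sketched as follows (the file locations and the
<code>mate-power-manager</code> process name are taken from the description
above; this is an illustration, not the actual patch):</p>
<pre><code class="language-bash"># Return success only when a live X session owns the power button,
# i.e. the recorded pid is running and the DE power manager is up.
xsession_owns_button() {  # usage: xsession_owns_button [PIDFILE]
  pidfile=${1:-/run/xsession.pid}
  [ -r "$pidfile" ] || return 1                        # no X session recorded
  kill -0 "$(cat "$pidfile")" 2>/dev/null || return 1  # stale pid
  pgrep -x mate-power-manager >/dev/null               # power manager up?
}
# In /etc/acpi/handler.sh: if xsession_owns_button; then exit 0; fi</code></pre>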
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/acpi-handler.patch"></script>
<h4 id="rtkit+spamming+logs" name="rtkit+spamming+logs">rtkit spamming logs</h4>
<p>Apparently, <code>rtkit</code> requires an <code>rtkit</code> user to exist. Otherwise it
will spam the logs with error messages. To correct this, use:</p>
<pre><code class="language-bash">useradd -r -s /sbin/nologin rtkit</code></pre>
<h4 id="xen+tweaks" name="xen+tweaks">xen tweaks</h4>
<p>For xen we need to make some adjustments...</p>
<ol>
<li>Tweak block device references.
<ul>
<li><code>/etc/fstab</code> : mount xvda and other devices</li>
<li><code>/boot/cmdline</code> : get the right xvda root device</li>
</ul></li>
<li>Enable/disable services
<ul>
<li>Disable: <code>slim</code>, <code>agetty-ttyX</code></li>
<li>Enable: <code>agetty-hvc0</code></li>
<li>Decide if you want to use <code>NetworkManager</code> or <code>dhcpcd</code>.</li>
</ul></li>
</ol>
<p>Normally, I would create a tarball image to transfer over; for
the image to work properly you need to preserve <code>capabilities</code>.</p>
<h3 id="Old+Notes" name="Old+Notes">Old Notes</h3>
<h4 id="PolKit+rule+tweaks" name="PolKit+rule+tweaks">PolKit rule tweaks</h4>
<p>Testing as of 2019-09-07, the following does not seem to be needed
any longer. I left it here just for reference (in case it breaks
again).</p>
<hr />
<p>OK, in my case, <code>shutdown</code>, <code>reboot</code> and local media access functions
were not available using the <a href="https://mate-desktop.org/" title="MATE Desktop environment">MATE</a> desktop.</p>
<p>To enable this I had to create/tweak the PolKit rules...</p>
<script src="https://tortugalabs.github.io/embed-like-gist/embed.js?style=github&showBorder=on&showLineNumbers=on&showFileMeta=on&showCopy=on&fetchFromJsDelivr=on&target=https://github.com/alejandroliu/0ink.net/blob/main/snippets/2019/void-installation/_attic_/tweak-polkit-rules.sh"></script>
<h3 id="Using+SLIM" name="Using+SLIM">Using SLIM</h3>
<p>I have switched to <a href="https://github.com/iwamatsu/slim" title="Simple Login Manager">SLiM</a> as the display manager. This is
configured in <code>/etc/slim.conf</code>.</p>
<p>Specifically, I update the login_cmd to be the following:</p>
<pre><code>login_cmd exec /bin/sh -l /etc/X11/Xsession %session</code></pre>
<hr />
My Linux Keyboard Shortcuts
urn:uuid:b7d00dbb-2915-3f4c-f285-e68d5ad4cf6a
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>In general we try to be similar to MS-Windows shortcuts.</p>
<p>Default bindings (in MATE)</p>
<table>
<thead>
<tr>
<th>Key</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>Alt + F4</td>
<td>Close the active item, or exit the active program</td>
</tr>
<tr>
<td>Alt + Tab</td>
<td>Switch between open items</td>
</tr>
<tr>
<td>Ctrl + Alt + Tab</td>
<td>Use the arrow keys to switch between open items</td>
</tr>
<tr>
<td>Alt + Esc</td>
<td>Cycle through items in the order in which they were opened</td>
</tr>
<tr>
<td>Alt + Spacebar</td>
<td>Open the shortcut menu for the active window</td>
</tr>
</tbody>
</table>
<p>Custom bindings (Window Manager)</p>
<table>
<thead>
<tr>
<th>Key</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ctrl + Esc</td>
<td>Open the Start menu</td>
</tr>
<tr>
<td>SuperKey + Up arrow</td>
<td>Toggle Maximize the window.</td>
</tr>
<tr>
<td>SuperKey + Down arrow</td>
<td>Remove current app from screen or minimize the desktop window.</td>
</tr>
<tr>
<td>SuperKey + Left arrow</td>
<td>Maximize the app or desktop window to the left side of the screen.</td>
</tr>
<tr>
<td>SuperKey + Right arrow</td>
<td>Maximize the app or desktop window to the right side of the screen.</td>
</tr>
<tr>
<td>SuperKey + D</td>
<td>Display or hide the desktop.</td>
</tr>
</tbody>
</table>
<p>This is configured using this script: <a href="https://github.com/TortugaLabs/void-utils/blob/master/keys/keybindings.sh">keybindings.sh</a></p>
<p>Xbindkeys</p>
<table>
<thead>
<tr>
<th>Key</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ctrl + Shift + Esc</td>
<td>Open Task Manager</td>
</tr>
<tr>
<td>SuperKey + L</td>
<td>Lock your PC</td>
</tr>
<tr>
<td>SuperKey + E</td>
<td>Open File Manager.</td>
</tr>
<tr>
<td>SuperKey + F</td>
<td>Open search.</td>
</tr>
<tr>
<td>SuperKey + R</td>
<td>Open the Run dialog box.</td>
</tr>
<tr>
<td>SuperKey + KP_Mult</td>
<td>Volume Up</td>
</tr>
<tr>
<td>SuperKey + KP_Minus</td>
<td>Volume Down</td>
</tr>
<tr>
<td>SuperKey + KP_Div</td>
<td>Mute Toggle</td>
</tr>
<tr>
<td>SuperKey + KP_Add</td>
<td>Switch PA output</td>
</tr>
</tbody>
</table>
<p>This makes use of this <a href="https://github.com/TortugaLabs/void-utils/blob/master/keys/xbindkeysrc">xbindkeysrc</a> file.</p>
<p><strong>TODO:</strong> SuperKey + V | Open the clipboard. </p>
Global Windows Keyboard Shortcuts
urn:uuid:9e614134-86e7-44c0-5f92-e94aef8aff89
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Common Window Management Shortcuts</p>
<table>
<thead>
<tr>
<th>Key</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>Alt + F4</td>
<td>Close the active item, or exit the active program</td>
</tr>
<tr>
<td>Alt + Tab</td>
<td>Switch between open items</td>
</tr>
<tr>
<td>Ctrl + Alt + Tab</td>
<td>Use the arrow keys to switch between open items</td>
</tr>
<tr>
<td>Alt + Esc</td>
<td>Cycle through items in the order in which they were opened</td>
</tr>
<tr>
<td>Ctrl + Esc</td>
<td>Open the Start menu</td>
</tr>
<tr>
<td>Alt + Spacebar</td>
<td>Open the shortcut menu for the active window</td>
</tr>
<tr>
<td>Ctrl + Shift + Esc</td>
<td>Open Task Manager</td>
</tr>
<tr>
<td>WinKey</td>
<td>Open or Close Start button</td>
</tr>
<tr>
<td>WinKey + D</td>
<td>Display or hide the desktop.</td>
</tr>
<tr>
<td>WinKey + E</td>
<td>Open File Explorer.</td>
</tr>
<tr>
<td>WinKey + F</td>
<td>Open File Explorer Search (Find).</td>
</tr>
<tr>
<td>WinKey + L</td>
<td>Lock your PC or switch accounts.</td>
</tr>
<tr>
<td>WinKey + M</td>
<td>Minimize all windows.</td>
</tr>
<tr>
<td>WinKey + Shift + M</td>
<td>Restore minimized windows on the desktop.</td>
</tr>
<tr>
<td>WinKey + R</td>
<td>Open the Run dialog box.</td>
</tr>
<tr>
<td>WinKey + V</td>
<td>Open the clipboard.</td>
</tr>
<tr>
<td>WinKey + Up arrow</td>
<td>Maximize the window.</td>
</tr>
<tr>
<td>WinKey + Down arrow</td>
<td>Remove current app from screen or minimize the desktop window.</td>
</tr>
<tr>
<td>WinKey + Left arrow</td>
<td>Maximize the app or desktop window to the left side of the screen.</td>
</tr>
<tr>
<td>WinKey + Right arrow</td>
<td>Maximize the app or desktop window to the right side of the screen.</td>
</tr>
<tr>
<td>WinKey + Home</td>
<td>Minimize all except the active desktop window (restores all windows on second stroke).</td>
</tr>
</tbody>
</table>
<p>Additional Window Management Shortcuts</p>
<table>
<thead>
<tr>
<th>Key</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>WinKey + Tab</td>
<td>Cycle through programs on the taskbar by using Aero Flip 3-D</td>
</tr>
<tr>
<td>Ctrl+WinKey + Tab</td>
<td>Use the arrow keys to cycle through programs on the taskbar by using Aero Flip 3-D</td>
</tr>
<tr>
<td>Left Alt + Shift</td>
<td>Switch the input language when multiple input languages are enabled</td>
</tr>
</tbody>
</table>
<p>Rare Window Management Shortcuts</p>
<table>
<thead>
<tr>
<th>Key</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ctrl+Shift</td>
<td>Switch the keyboard layout when multiple keyboard layouts are enabled</td>
</tr>
<tr>
<td>Right or Left Ctrl + Shift</td>
<td>Change the reading direction of text in right-to-left reading languages</td>
</tr>
<tr>
<td>WinKey + B</td>
<td>Set focus in the notification area.</td>
</tr>
<tr>
<td>WinKey + P</td>
<td>Choose a presentation display mode.</td>
</tr>
<tr>
<td>WinKey + T</td>
<td>Cycle through apps on the taskbar.</td>
</tr>
<tr>
<td>WinKey + U</td>
<td>Open Ease of Access Center.</td>
</tr>
<tr>
<td>WinKey + Pause</td>
<td>Display the System Properties dialog box.</td>
</tr>
<tr>
<td>WinKey + Ctrl + Enter</td>
<td>Turn on Narrator.</td>
</tr>
<tr>
<td>WinKey + Plus (+)</td>
<td>Open Magnifier.</td>
</tr>
</tbody>
</table>
<p>CUA Shortcuts</p>
<table>
<thead>
<tr>
<th>Key</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>F1</td>
<td>Display Help</td>
</tr>
<tr>
<td>F2</td>
<td>Rename the selected item</td>
</tr>
<tr>
<td>F3</td>
<td>Search</td>
</tr>
<tr>
<td>Ctrl + F4</td>
<td>Close the active document (in programs that allow you to have multiple documents open simultaneously)</td>
</tr>
<tr>
<td>F5 (or Ctrl + R)</td>
<td>Refresh the active window</td>
</tr>
<tr>
<td>F6</td>
<td>Cycle through screen elements in a window or on the desktop</td>
</tr>
<tr>
<td>F10</td>
<td>Activate the menu bar in the active program</td>
</tr>
<tr>
<td>Shift + F10</td>
<td>Display the shortcut menu for the selected item</td>
</tr>
<tr>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Esc</td>
<td>Cancel the current task</td>
</tr>
<tr>
<td>Delete (or Ctrl + D)</td>
<td>Delete current item</td>
</tr>
<tr>
<td>Shift + Delete</td>
<td>Delete current item permanently (bypasses the Recycle Bin)</td>
</tr>
<tr>
<td>Alt + Enter</td>
<td>Display properties for the selected item</td>
</tr>
<tr>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Ctrl + C (or Ctrl + Insert)</td>
<td>Copy the selected item</td>
</tr>
<tr>
<td>Ctrl + X</td>
<td>Cut the selected item</td>
</tr>
<tr>
<td>Ctrl + V (or Shift + Insert)</td>
<td>Paste the selected item</td>
</tr>
<tr>
<td>Ctrl + Z</td>
<td>Undo an action</td>
</tr>
<tr>
<td>Ctrl + Y</td>
<td>Redo an action</td>
</tr>
<tr>
<td>Ctrl + A</td>
<td>Select all items in a document or window</td>
</tr>
<tr>
<td>Ctrl + Mouse scroll wheel</td>
<td>Change zoom factor</td>
</tr>
<tr>
<td>Alt + underlined letter</td>
<td>Display the corresponding menu</td>
</tr>
<tr>
<td>Underlined letter (in an open menu)</td>
<td>Perform the menu command (or other underlined command)</td>
</tr>
<tr>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Ctrl + Right Arrow</td>
<td>Move the cursor to the beginning of the next word</td>
</tr>
<tr>
<td>Ctrl + Left Arrow</td>
<td>Move the cursor to the beginning of the previous word</td>
</tr>
<tr>
<td>Ctrl + Down Arrow</td>
<td>Move the cursor to the beginning of the next paragraph</td>
</tr>
<tr>
<td>Ctrl + Up Arrow</td>
<td>Move the cursor to the beginning of the previous paragraph</td>
</tr>
<tr>
<td>Ctrl + Shift with an arrow key</td>
<td>Select a block of text</td>
</tr>
</tbody>
</table>
Third Party SimpleNote clients
urn:uuid:ff2fdbc9-c5df-c664-386b-09d336fdc114
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>An inventory of SimpleNote clients:</p>
<ul>
<li><a href="https://github.com/insanum/sncli">sncli</a></li>
<li><a href="https://github.com/cpbotha/nvpy">nvpy</a></li>
<li><a href="https://github.com/brittohalloran/notestack">notestack</a></li>
<li><a href="https://github.com/dotemacs/simplenote.el">simplenote.el</a></li>
<li><a href="https://github.com/carlo/simplenote-js">simplenote-js</a></li>
<li><a href="https://www.npmjs.com/package/simplenote-sync">simplenote-sync</a></li>
<li><a href="https://www.npmjs.com/package/simplenote">simplenote pkg</a></li>
</ul>
<p>Current plan is to use a <code>webapp</code> wrapper for the actual
simplenote web site.</p>
Docker on Alpine Linux
urn:uuid:0087de2f-0f7d-c7f3-fc5a-3f83b05eb493
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Alpine Linux Quick installation</p>
<p>See the <a href="https://wiki.alpinelinux.org/wiki/Docker">wiki</a>. This applies to Alpine Linux &gt; 3.8.</p>
<ol>
<li>Un-comment community repo from <code>/etc/apk/repositories</code></li>
<li>apk add docker</li>
<li>rc-update add docker boot</li>
<li>service docker start</li>
</ol>
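<p>As a sketch, the steps above condense into the following shell session
(run as root; assumes the community repository is already enabled):</p>
<pre><code># Quick Docker install on Alpine (run as root)
apk update
apk add docker
rc-update add docker boot   # start the daemon on every boot
service docker start        # start it now</code></pre>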
<p>Optional: (docker compose)</p>
<pre><code>apk add docker-compose</code></pre>
<hr />
<p>Note 2021-03-21: When I tested this, the <code>daemon.json</code> did not
work! Your mileage may vary.</p>
<hr />
<p>Recommended for user namespace isolation (not sure if this works)</p>
<p>It is also good to use data mode (persistent <code>/var</code>), as most docker data is stored there.</p>
<pre><code>adduser -SDHs /sbin/nologin dockremap
addgroup -S dockremap
echo dockremap:100000:65535 | tee /etc/subuid
echo dockremap:100000:65535 | tee /etc/subgid</code></pre>
<p>In <code>/etc/docker/daemon.json</code>:</p>
<pre><code>{
"userns-remap": "dockremap"
}</code></pre>
<p>For more info <a href="https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file">docker docs</a></p>
<h3 id="Test+docker%3A" name="Test+docker%3A">Test docker:</h3>
<ol>
<li>docker version</li>
<li>docker info</li>
<li>docker run hello-world</li>
<li>docker image ls</li>
<li>docker container ls</li>
<li>docker container ls --all</li>
<li>docker container ls -aq</li>
</ol>
<h3 id="Mounting+NFS" name="Mounting+NFS">Mounting NFS</h3>
<p>From docker 17.06, you can mount NFS shares to the container directly when you run it, without the need of extra capabilities</p>
<pre><code>docker run --mount 'type=volume,src=VOL_NAME,volume-driver=local,dst=/LOCAL-MNT,volume-opt=type=nfs,volume-opt=device=:/NFS-SHARE,"volume-opt=o=addr=NFS-SERVER,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,retrans=2"' -d -it --name mycontainer ubuntu</code></pre>
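<p>Alternatively (this variant is my own sketch, not the article's), the NFS
volume can be created ahead of time with <code>docker volume create</code> and
then referenced by name. <code>NFS-SERVER</code>, <code>/NFS-SHARE</code> and
<code>/LOCAL-MNT</code> remain placeholders:</p>
<pre><code># Pre-create a local-driver volume backed by NFS, then mount it by name
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=NFS-SERVER,vers=4,hard \
  --opt device=:/NFS-SHARE \
  nfsvol
docker run -d -it --name mycontainer -v nfsvol:/LOCAL-MNT ubuntu</code></pre>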
<h3 id="Useful+options+for+docker" name="Useful+options+for+docker">Useful options for docker</h3>
<ul>
<li><code>docker run -d</code> : Run as a daemon (runs in the background).</li>
<li>NFS mounting (pre 17.06)
<ul>
<li><code>you@host > mount server:/dir /path/to/mount/point</code></li>
<li><code>you@host > docker run -v /path/to/mount/point:/path/to/mount/point</code></li>
</ul></li>
<li><code>docker run -p 4000:80</code> : Forward port 4000 to 80.
So host listens on port 4000 and everything is forwarded to port 80 on the container.</li>
</ul>
<h3 id="Alpine+Linux+relocating+%2Fvar%2Flib%2Fdocker" name="Alpine+Linux+relocating+%2Fvar%2Flib%2Fdocker">Alpine Linux relocating /var/lib/docker</h3>
<p>In the file <code>/etc/conf.d/docker</code> you can add additional command line
options in:</p>
<p><code>DOCKER_OPTS</code></p>
<p>In particular you can use the <code>-g</code> option.</p>
<p>See <a href="https://linuxconfig.org/how-to-move-docker-s-default-var-lib-docker-to-another-directory-on-ubuntu-debian-linux">linuxconfig.org article</a></p>
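<p>A minimal sketch, assuming the OpenRC conf file layout (the target
directory is only an example):</p>
<pre><code># /etc/conf.d/docker -- extra options passed to the docker daemon
# -g is the legacy flag for the data root (newer daemons use --data-root)
DOCKER_OPTS="-g /data/docker"</code></pre>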
<h3 id="Make+your+own+docker+image%2C+quick+example" name="Make+your+own+docker+image%2C+quick+example">Make your own docker image, quick example</h3>
<ul>
<li><a href="https://docs.docker.com/get-started/part2/#run-the-app">Run the app</a></li>
</ul>
<h3 id="Making+changes+to+an+existing+image" name="Making+changes+to+an+existing+image">Making changes to an existing image</h3>
<ul>
<li><code>docker run -i -t [--name guest] image_name /bin/bash|/bin/sh</code></li>
<li>... make changes to it ...</li>
<li><code>docker stop name|container_id</code></li>
<li>`docker commit -m 'change name' -a 'A N Other' container_id image_name
<ul>
<li>container id: <code>$(docker ps -l -q)</code></li>
</ul></li>
<li><code>docker rm guest|container_id</code></li>
</ul>
<h3 id="Better+way+to+create+container+images%3A" name="Better+way+to+create+container+images%3A">Better way to create container images:</h3>
<ul>
<li><a href="https://serversforhackers.com/c/updating-containers">Updating containers</a></li>
</ul>
<h3 id="Just+use+them..." name="Just+use+them...">Just use them...</h3>
<ul>
<li><a href="https://github.com/maxexcloo/Docker">docker</a></li>
<li><a href="https://www.linuxserver.io/">linuxserver</a></li>
</ul>
Alpine on OTC
urn:uuid:3da212f0-a1a8-9a38-a9c1-73c18f69bde8
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>These are just random thoughts nothing really was implemented.</p>
<p>Alpine Linux image</p>
<ul>
<li>
<p>preparation: jq and other deps to <code>/apks/x86_64</code></p>
</li>
<li>
<p><code>/etc/local.d/</code>
<code>cloud-init-lite</code></p>
</li>
</ul>
<ol start="0">
<li>if <code>/etc/network/interfaces</code> exists we abort</li>
<li><code>apk add --force-non-repository /path oniguruma,jq</code> .. restore <code>/etc/apk/world</code></li>
<li><code>udhcpc -b -p /var/run/udhcpc.eth0.pid -i eth0</code></li>
<li>install <code>openssh</code> and start it</li>
<li><code>wget</code> meta data and create <code>/root/.ssh/authorized_keys</code></li>
</ol>
<ul>
<li><code>wget -O- http://169.254.169.254/openstack/YYYY-MM-DD/meta_data.json</code></li>
</ul>
Windows Account Lockouts
urn:uuid:43643bfd-1a8a-5714-3618-56de1349deff
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>To prevent Windows account lockouts, the following can be done:</p>
<ul>
<li>Delete Internet Explorer browsing history</li>
<li>Run the following:
<ul>
<li>Open Start --> Search field --> type Run --> <code>rundll32.exe keymgr.dll, KRShowKeyMgr</code> --> Delete the stored credentials</li>
</ul></li>
<li>Disconnect network shares</li>
<li>Change password</li>
</ul>
Skipping grep when using AWK
urn:uuid:66d04e63-394e-86d4-18df-2b95789d29ce
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Over the years, we've seen many people use this pattern (filter-map):</p>
<pre><code>$ [data is generated] | grep something | awk '{print $2}'</code></pre>
<p>but it can be shortened to:</p>
<pre><code>$ [data is generated] | awk '/something/ {print $2}'</code></pre>
<h2 id="You+%28probably%29+don%27t+need+grep" name="You+%28probably%29+don%27t+need+grep">You (probably) don't need grep</h2>
<p>Following this logic, you can replace a simple grep with:</p>
<pre><code>$ [data is generated] | awk '/something/'</code></pre>
<p>This will <em>implicitly</em> print lines that match the regular expression.</p>
<p>If you feel lost, here is a series of posts about awk for you:</p>
<ul>
<li><a href="https://blog.jpalardy.com/posts/why-learn-awk/">Why Learn AWK</a></li>
<li><a href="https://blog.jpalardy.com/posts/awk-tutorial-part-1/">Tutorial Part 1</a></li>
<li><a href="https://blog.jpalardy.com/posts/awk-tutorial-part-2/">Tutorial Part 2</a></li>
<li><a href="https://blog.jpalardy.com/posts/awk-tutorial-part-3/">Tutorial Part 3</a></li>
</ul>
<h2 id="Why+would+you+want+to+do+this%3F" name="Why+would+you+want+to+do+this%3F">Why would you want to do this?</h2>
<p>There are a number of reasons:</p>
<ul>
<li>it's shorter to type</li>
<li>it spawns one less process</li>
<li>awk uses extended regular expressions by default - like <code>grep -E</code></li>
<li>it's ready to "augment" with more awk</li>
</ul>
<h2 id="What+about+grep+-v%3F" name="What+about+grep+-v%3F">What about <code>grep -v</code>?</h2>
<p><code>grep -v</code> can be done with:</p>
<pre><code>$ [data is generated] | awk '! /something/'</code></pre>
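<p>Putting it together: one awk process can filter, exclude, and extract
fields at once. The sample data below is made up for illustration:</p>
<pre><code># sample data is invented; awk both filters and extracts in one process
printf 'ok alpha\nerr beta\nok gamma\n' | awk '/^ok/ {print $2}'
# prints: alpha, then gamma (one per line)</code></pre>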
<hr />
<p>Reference: <a href="https://blog.jpalardy.com/posts/skip-grep-use-awk/">jpalardy.com</a></p>
Naming Schemes
urn:uuid:2c03b2fa-79a4-31c8-bcc3-5d6b1ad0f37f
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This web site contains lists of names on different topics, which
can be used for naming schemes:</p>
<p><a href="https://namingschemes.com/Main_Page">Naming Schemes</a></p>
Build a VR app in 15 minutes
urn:uuid:c65f75dc-45d7-418e-9ae9-d633ee9679d3
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>In 15 minutes, you can develop a virtual reality application and run
it in a web browser, on a VR headset, or with <a href="https://vr.google.com/daydream/">Google Daydream</a>.
The key is <a href="https://aframe.io/">A-Frame</a>, an open source toolkit built
by the <a href="https://mozvr.com/">Mozilla VR Team</a>.</p>
<h3 id="Test+It" name="Test+It">Test It</h3>
<p>Open <a href="https://theta360developers.github.io/360gallery/">this link</a>
using Chrome or Firefox on your mobile phone.</p>
<p>Put your phone into <a href="https://vr.google.com/cardboard/">Google Cardboard</a>
and stare at a menu square to switch the 360-degree scene.</p>
<p><img src="/images/2018/vr-in-15min-1.png" alt="vr-in-15mins-1" /></p>
<h3 id="Fork+it" name="Fork+it">Fork it</h3>
<p>Fork the <a href="https://github.com/theta360developers/360gallery">sample repository from GitHub</a>.
Change directory into the repo.</p>
<p><img src="/images/2018/vr-in-15min-2.png" alt="vr-in-15mins-2" /></p>
<p>If you have 360-degree images, you can drop them into the img/
sub-directory. If you don't have 360-degree images, you can get
started with the open source <a href="http://hugin.sourceforge.net/">Hugin</a>
panorama photo stitcher. The boilerplate app includes <a href="http://theta360.guide/community-document/community.html">RICOH THETA media</a>
I took at a meetup in San Francisco.</p>
<h3 id="Create+thumbnails" name="Create+thumbnails">Create thumbnails</h3>
<p>The menus in the headset are standard images that are 240x240 pixels.
A-Frame handles the perspective shifts for you automatically.</p>
<p><img src="/images/2018/vr-in-15min-3.png" alt="vr-in-15mins-3" /></p>
<h3 id="Edit+code" name="Edit+code">Edit code</h3>
<p>If you use the same image file names and overwrite 1.jpg in /img, you
do not need to edit the code at all. If you want to extend the program
or modify it with your own filenames, change the id and the src in
index.html to match your files.</p>
<pre><code>&lt;body&gt;
&lt;a-scene&gt;
&lt;a-assets&gt;
&lt;img id="kieran" src="img/1.jpg"&gt;
&lt;img id="kieran-thumb" crossorigin="anonymous" src="img/kieran-thumb.png"&gt;
&lt;img id="christian-thumb" crossorigin="anonymous" src="img/christian-thumb.png"&gt;
&lt;img id="eddie-thumb" crossorigin="anonymous" src="img/eddie-thumb.png"&gt;
&lt;audio id="click-sound" crossorigin="anonymous" src="https://cdn.aframe.io/360-image-gallery-boilerplate/audio/click.ogg"&gt;&lt;/audio&gt;
&lt;img id="christian" crossorigin="anonymous" src="img/2.jpg"&gt;
&lt;img id="eddie" crossorigin="anonymous" src="img/4.jpg"&gt;</code></pre>
<p>Scroll down and edit the section for the menu links.</p>
<pre><code>&lt;!-- 360-degree image. --&gt;
&lt;a-sky id="image-360" radius="10" src="#kieran"&gt;&lt;/a-sky&gt;
&lt;!-- Image links. --&gt;
&lt;a-entity id="links" layout="type: line; margin: 1.5" position="0 -1 -4"&gt;
&lt;a-entity template="src: #link" data-src="#christian" data-thumb="#christian-thumb"&gt;&lt;/a-entity&gt;
&lt;a-entity template="src: #link" data-src="#kieran" data-thumb="#kieran-thumb"&gt;&lt;/a-entity&gt;
&lt;a-entity template="src: #link" data-src="#eddie" data-thumb="#eddie-thumb"&gt;&lt;/a-entity&gt;
&lt;/a-entity&gt;</code></pre>
<h3 id="Upload+to+GitHub+pages" name="Upload+to+GitHub+pages">Upload to GitHub pages</h3>
<p>Add and commit your changes:</p>
<pre><code>git add *
git commit -a -m 'changed images'
git push</code></pre>
<p>Open your app on a mobile phone at <code>http://username.github.io/360gallery</code>.</p>
<h3 id="Next+steps" name="Next+steps">Next steps</h3>
<p>This is a brief taste of A-Frame to illustrate that WebVR is easy and
accessible to web developers. Go to <a href="http://aframe.io">aframe.io</a> to see
more demos. Although the display of 360 images is not true VR, it is
easy, fun, and accessible today. Using 360 images is also a great way
to start to understand the basics of augmented reality.</p>
<p>Take your own pictures with a standard camera and stitch them together
or buy or borrow a 360-degree camera. The camera I used supports
360-degree video files and live streaming.</p>
<h3 id="Troubleshooting" name="Troubleshooting">Troubleshooting</h3>
<p>The application won't run from a local file that you open in your
browser. You must either run a local webserver like Apache2 or upload
it to an external site like GitHub Pages for testing.</p>
<p>If you're using an Oculus Rift or HTC Vive, you may need to install
Firefox Nightly or experimental Chromium builds. See the current
status of your browser at <a href="https://iswebvrready.org/">Is WebVR Ready?</a></p>
<p>360-degree video works on desktop browsers. I've experienced some
glitches on mobile devices. The technology is improving quickly.</p>
<p>Source <a href="https://opensource.com/life/16/11/build-virtual-reality-app">OpenSource</a></p>
Set your google account to automatically delete
urn:uuid:235c8d5f-db64-2dd9-c456-f80810216f9f
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Want to share your family photos after your death, but take your
search history to the grave? All that and more is possible with
Google's Inactive Account Manager.</p>
<h3 id="How+You+Can+Control+Your+Information+After+Death" name="How+You+Can+Control+Your+Information+After+Death">How You Can Control Your Information After Death</h3>
<p>It's not nice to think about, but one day, you will die, along with
the keys to your online kingdom. And these days, those online accounts
can hold a lot of stuff you may want to pass on.</p>
<p>Your Google account has a feature tucked deep in the bowels of your
account settings called "Inactive Account Manager". Although the
feature is several years old now, it's practically unknown among
Google users–in a casual survey of people outside our office who had
Google accounts, not one of them was aware of the feature.</p>
<p>Inactive Account Manager is what fans of old spy movies and
psychological thrillers will immediately recognize as a "dead man's
switch". Once activated, if you do not interact with your Google
account for a set amount of time, Google's servers will automatically
notify your trusted contacts and/or share specified data with
them. Or, at your instruction, it can wipe your account.</p>
<p>In this way, you can ensure that things like family photos stored in
Google Photos are available to your family, that your spouse will
have full access to your contacts to manage your business affairs, or
that anyone you wish to share your account with upon your demise or
incapacitation can access it legitimately and without resorting to
masquerading themselves as you.</p>
<h3 id="Setting+Up+the+Inactive+Account+Manager" name="Setting+Up+the+Inactive+Account+Manager">Setting Up the Inactive Account Manager</h3>
<p>To set up Inactive Account Manager, make sure you're logged into your
Google account and visit this <a href="https://www.google.com/settings/u/0/account/inactive">page</a>.</p>
<p>You can supply a contact phone number for each trusted contact (don't worry, they
won't get an immediate text indicating that you've selected them, so
there won't be any awkward conversations about death triggered by
this process). Then you need to specify what Google data you want to
share with them. We'd encourage you to apply this step selectively,
and not simply check "Select all". Most of us would happily share our
Google Photo collections with our next of kin, after all, but would
prefer to keep our search histories private.</p>
<p><img src="/images/2018/google-auto-delete-1.png" alt="google-auto-delete-1" /></p>
<p>The final step is the weightiest one: selecting whether or not your
Google account will be wiped upon the completion of the timeout period.</p>
<p><img src="/images/2018/google-auto-delete-2.png" alt="google-auto-delete-2" /></p>
<p>There is no option to partially delete the data, so make this very
binary decision with care. You cannot, for example, wipe your search
history and email but leave your YouTube content and Blogger posts
intact for posterity. Once the countdown is complete, like a real
dead man's switch, the account data is gone forever.</p>
<p>Source <a href="https://www.howtogeek.com/273488/how-to-set-your-google-account-to-automatically-delete-or-share-upon-your-death/">How-To Geek</a></p>
HTML Entities
urn:uuid:afd77851-3a37-7479-4740-d3779c83b92d
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>(remember the ampersand at the start and the semi-colon at the end of each entity)</p>
<table>
<thead>
<tr>
<th>Character</th>
<th>Entity</th>
</tr>
</thead>
<tbody>
<tr><td>Á</td><td>&amp;Aacute;</td></tr>
<tr><td>á</td><td>&amp;aacute;</td></tr>
<tr><td>À</td><td>&amp;Agrave;</td></tr>
<tr><td>Â</td><td>&amp;Acirc;</td></tr>
<tr><td>à</td><td>&amp;agrave;</td></tr>
<tr><td>â</td><td>&amp;acirc;</td></tr>
<tr><td>Ä</td><td>&amp;Auml;</td></tr>
<tr><td>ä</td><td>&amp;auml;</td></tr>
<tr><td>Ã</td><td>&amp;Atilde;</td></tr>
<tr><td>ã</td><td>&amp;atilde;</td></tr>
<tr><td>Å</td><td>&amp;Aring;</td></tr>
<tr><td>å</td><td>&amp;aring;</td></tr>
<tr><td>Æ</td><td>&amp;AElig;</td></tr>
<tr><td>æ</td><td>&amp;aelig;</td></tr>
<tr><td>Ç</td><td>&amp;Ccedil;</td></tr>
<tr><td>ç</td><td>&amp;ccedil;</td></tr>
<tr><td>Ð</td><td>&amp;ETH;</td></tr>
<tr><td>ð</td><td>&amp;eth;</td></tr>
<tr><td>É</td><td>&amp;Eacute;</td></tr>
<tr><td>é</td><td>&amp;eacute;</td></tr>
<tr><td>È</td><td>&amp;Egrave;</td></tr>
<tr><td>è</td><td>&amp;egrave;</td></tr>
<tr><td>Ê</td><td>&amp;Ecirc;</td></tr>
<tr><td>ê</td><td>&amp;ecirc;</td></tr>
<tr><td>Ë</td><td>&amp;Euml;</td></tr>
<tr><td>ë</td><td>&amp;euml;</td></tr>
<tr><td>Í</td><td>&amp;Iacute;</td></tr>
<tr><td>í</td><td>&amp;iacute;</td></tr>
<tr><td>Ì</td><td>&amp;Igrave;</td></tr>
<tr><td>ì</td><td>&amp;igrave;</td></tr>
<tr><td>Î</td><td>&amp;Icirc;</td></tr>
<tr><td>Ï</td><td>&amp;Iuml;</td></tr>
<tr><td>ï</td><td>&amp;iuml;</td></tr>
<tr><td>ñ</td><td>&amp;ntilde;</td></tr>
<tr><td>Ó</td><td>&amp;Oacute;</td></tr>
<tr><td>ó</td><td>&amp;oacute;</td></tr>
<tr><td>Ò</td><td>&amp;Ograve;</td></tr>
<tr><td>ò</td><td>&amp;ograve;</td></tr>
<tr><td>Ô</td><td>&amp;Ocirc;</td></tr>
<tr><td>ô</td><td>&amp;ocirc;</td></tr>
<tr><td>Ö</td><td>&amp;Ouml;</td></tr>
<tr><td>ö</td><td>&amp;ouml;</td></tr>
<tr><td>Õ</td><td>&amp;Otilde;</td></tr>
<tr><td>õ</td><td>&amp;otilde;</td></tr>
<tr><td>Ø</td><td>&amp;Oslash;</td></tr>
<tr><td>ø</td><td>&amp;oslash;</td></tr>
<tr><td>ß</td><td>&amp;szlig;</td></tr>
<tr><td>Þ</td><td>&amp;THORN;</td></tr>
<tr><td>þ</td><td>&amp;thorn;</td></tr>
<tr><td>Ú</td><td>&amp;Uacute;</td></tr>
<tr><td>ú</td><td>&amp;uacute;</td></tr>
<tr><td>Ù</td><td>&amp;Ugrave;</td></tr>
<tr><td>ù</td><td>&amp;ugrave;</td></tr>
<tr><td>Û</td><td>&amp;Ucirc;</td></tr>
<tr><td>û</td><td>&amp;ucirc;</td></tr>
<tr><td>Ü</td><td>&amp;Uuml;</td></tr>
<tr><td>ü</td><td>&amp;uuml;</td></tr>
<tr><td>Ý</td><td>&amp;Yacute;</td></tr>
<tr><td>ý</td><td>&amp;yacute;</td></tr>
<tr><td>ÿ</td><td>&amp;yuml;</td></tr>
<tr><td>©</td><td>&amp;copy;</td></tr>
<tr><td>®</td><td>&amp;reg;</td></tr>
<tr><td>™</td><td>&amp;trade;</td></tr>
<tr><td>&amp;</td><td>&amp;amp;</td></tr>
<tr><td>&lt;</td><td>&amp;lt;</td></tr>
<tr><td>&gt;</td><td>&amp;gt;</td></tr>
<tr><td>€</td><td>&amp;euro;</td></tr>
<tr><td>¢</td><td>&amp;cent;</td></tr>
<tr><td>£</td><td>&amp;pound;</td></tr>
<tr><td>"</td><td>&amp;quot;</td></tr>
<tr><td>‘</td><td>&amp;lsquo;</td></tr>
<tr><td>’</td><td>&amp;rsquo;</td></tr>
<tr><td>“</td><td>&amp;ldquo;</td></tr>
<tr><td>”</td><td>&amp;rdquo;</td></tr>
<tr><td>«</td><td>&amp;laquo;</td></tr>
<tr><td>»</td><td>&amp;raquo;</td></tr>
<tr><td>—</td><td>&amp;mdash;</td></tr>
<tr><td>–</td><td>&amp;ndash;</td></tr>
<tr><td>°</td><td>&amp;deg;</td></tr>
<tr><td>±</td><td>&amp;plusmn;</td></tr>
<tr><td>¼</td><td>&amp;frac14;</td></tr>
<tr><td>½</td><td>&amp;frac12;</td></tr>
<tr><td>¾</td><td>&amp;frac34;</td></tr>
<tr><td>×</td><td>&amp;times;</td></tr>
<tr><td>÷</td><td>&amp;divide;</td></tr>
<tr><td>α</td><td>&amp;alpha;</td></tr>
<tr><td>β</td><td>&amp;beta;</td></tr>
<tr><td>∞</td><td>&amp;infin;</td></tr>
<tr><td>&nbsp;</td><td>&amp;nbsp;</td></tr>
</tbody>
</table>
<p>Reference:</p>
<ul>
<li><a href="http://www.starr.net/is/type/htmlcodes.html">International Accent Marks and Diacriticals</a></li>
<li><a href="http://www.techdictionary.com/ascii.html">ASCII Hex</a></li>
</ul>
3 Open Source Password Managers
urn:uuid:aa388af9-33b3-145d-9d58-5fc3566719d2
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Keep your data and accounts safe by using a secure open source
password manager to store unique, complex passwords.</p>
<p>Maintaining complex, unique passwords for each site and service you
use is among the most common pieces of advice that security
professionals provide to the public every year.</p>
<p>Yet no matter how many times it is said, it seems like a week doesn't
go by where a high-profile hacking story hits the news, revealing that
users of the service in question more often than not had such secure
passwords as "12345" or "password" as the only wall of protection on
their account.</p>
<p>Or perhaps a user offers up just enough variation on the classic
password selection to get past the minimal rules of the service.
Unfortunately, "Pa$$w0rd!" isn't secure in any meaningful way, either.
At this point, almost every variation of words and phrases strung
together with a few numbers or substitutions is simply too easy for a
password cracking tool to make its way through, and the shorter the
password, the easier.</p>
<p>The best passwords are long, random or pseudo-random combinations of
every possible character allowed, with a different password for each
unique use. But how could a normal person remember the hundreds or
even thousands of individual passwords associated with each account
they've ever created? The short answer is: they can't. And don't even
think about writing a password down in plain text, whether in the
physical world or the digital.</p>
<p>Perhaps the easiest way to keep track of these complex, unique
passwords is with a password manager, which provides easy access to
strong encryption. While proprietary commercial solutions like LastPass
are popular, there are several open source solutions as well. And with
passwords, being able to audit the source code of your password manager
is especially important, as it helps ensure that your passwords are
encrypted properly and are not vulnerable to backdoors.</p>
<p>So without further ado, here are a few open source password managers
we hope you will consider.</p>
<h3 id="KeePass" name="KeePass">KeePass</h3>
<p><a href="http://keepass.info/">KeePass</a> is a GPLv2-licensed password manager,
primarily designed for Windows but also running elsewhere. KeePass
offers multiple strong encryption options, easy exports, multiple
user keys, advanced searching features, and more. Designed for desktop
use, there are plugins that allow direct use from your web browser,
and it can run from a USB stick if you'd prefer to physically carry
your passwords from machine to machine.</p>
<p><a href="https://www.keepassx.org/">KeePassX</a>, which started as a Linux port
of KeePass, is another project you may consider. KeePassX is compatible
with KeePass 2 password files, and has also been ported to run on
different operating systems.</p>
<h3 id="Padlock" name="Padlock">Padlock</h3>
<p><a href="https://padlock.io/">Padlock</a> is a very new entrant into the world of
open source password managers. Currently available for Windows, Mac,
iOS, and Android, with a Linux client in the works, Padlock is
designed as a "minimalist" password manager. Its
<a href="https://github.com/MaKleSoft/padlock">source</a> is available on GitHub.
The project is also developing a
<a href="https://github.com/maklesoft/padlock-cloud">cloud backend</a>, also open
source, which will be a welcomed addition to anyone tired of managing
password files or setting up syncing across multiple computers.</p>
<h3 id="Passbolt" name="Passbolt">Passbolt</h3>
<p><a href="https://www.passbolt.com/">Passbolt</a> is another relatively new option, with
plugins available for Firefox and Chrome and mobile and command-line
options on the way. Based on OpenPGP, you can check out its online
<a href="https://demo.passbolt.com/auth/login">demo</a> which shows off some of
the features (you'll need to install the plugin for your browser, though).
You can check out the source code on <a href="https://github.com/passbolt">GitHub</a>.</p>
<hr />
<p>Using a password manager that you trust alongside complex passwords is
not a substitute for taking other security precautions, nor is it
foolproof. But for many users, it can be an important part of keeping
your digital life secured. These definitely aren't the only options
out there. There are some older options, like <a href="https://clipperz.is/">Clipperz</a>
and <a href="https://pwsafe.org/">Password Safe</a>, and web-based tools like
<a href="https://github.com/tildaslash/RatticWeb">RatticDB</a> that I would be
interested to try out.</p>
<p>Source <a href="https://opensource.com/article/16/12/password-managers">opensource.com</a></p>
How to encrypt linux partitions with LUKS
urn:uuid:d6550883-f9c9-5b23-c242-004da20e2ca9
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>There are plenty of reasons why people would need to encrypt a
partition. Whether they're rooted in privacy, security, or
confidentiality, setting up a basic encrypted partition on a Linux
system is fairly easy. This is especially true when using LUKS, since
its functionality is built directly into the kernel.</p>
<h2 id="Installing+Cryptsetup" name="Installing+Cryptsetup">Installing Cryptsetup</h2>
<h3 id="Debian%2FUbuntu" name="Debian%2FUbuntu">Debian/Ubuntu</h3>
<p>On both Debian and Ubuntu, the <code>cryptsetup</code> utility is easily
available in the repositories. The same should be true for Mint or
any of their other derivatives.</p>
<pre><code>$ sudo apt-get install cryptsetup</code></pre>
<h3 id="CentOS%2FFedora" name="CentOS%2FFedora">CentOS/Fedora</h3>
<p>Again, the required tools are easily available in both CentOS and
Fedora. These distributions break them down into multiple packages,
but they can still be easily installed using <code>yum</code> and <code>dnf</code>
respectively.</p>
<h4 id="CentOS" name="CentOS">CentOS</h4>
<pre><code># yum install crypto-utils cryptsetup-luks cryptsetup-luks-devel cryptsetup-luks-libs</code></pre>
<h4 id="Fedora" name="Fedora">Fedora</h4>
<pre><code># dnf install crypto-utils cryptsetup cryptsetup-luks</code></pre>
<h3 id="OpenSUSE" name="OpenSUSE">OpenSUSE</h3>
<p>OpenSUSE is more like the Debian based distributions, including
everything that you need with <code>cryptsetup</code>.</p>
<pre><code># zypper in cryptsetup</code></pre>
<h3 id="Arch+Linux" name="Arch+Linux">Arch Linux</h3>
<p>Arch stays true to its "keep it simple" philosophy here as well.</p>
<pre><code># pacman -S cryptsetup</code></pre>
<h3 id="Gentoo" name="Gentoo">Gentoo</h3>
<p>The main concern that Gentoo users should have when installing the
tools necessary for using LUKS is whether or not their kernel has
support. This guide is not going to cover that part, but just be
aware that kernel support is a factor. If your kernel does support
LUKS, you can just emerge the package.</p>
<pre><code># emerge --ask cryptsetup</code></pre>
<h2 id="Setting+Up+The+Partition" name="Setting+Up+The+Partition">Setting Up The Partition</h2>
<p><em>WARNING:</em> <strong>The following will erase all data on the partition being
used and will make it unrecoverable. Proceed with caution.</strong></p>
<p>From here on, none of this is distribution specific; it will work the
same on any distribution. The defaults provided are actually quite good,
but they can easily be customized. If you really aren't comfortable
playing with them, don't worry. If you do know what you want to do,
feel free.</p>
<p>The basic options are as follows:</p>
<ul>
<li><code>--cipher</code> : Determines the cryptographic cipher used on the
partition. The default is <code>aes-xts-plain64</code>.</li>
<li><code>--key-size</code> : The length of the key in bits. The default is 256.</li>
<li><code>--hash</code> : Chooses the hash algorithm used to derive the key. The
default is <code>sha256</code>.</li>
<li><code>--iter-time</code> : The time spent on passphrase processing. The
default is 2000 milliseconds.</li>
<li><code>--use-random</code>/<code>--use-urandom</code> : Determines the random number
generator used. The default is <code>--use-random</code>.</li>
</ul>
<p>So, a basic command with no options would look like the line below.</p>
<pre><code># cryptsetup luksFormat /dev/sdb1</code></pre>
<p>Obviously, you'd want to use the path to whichever partition you're
encrypting. If you do want to use options, the command would look like
the following.</p>
<pre><code># cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-urandom /dev/sdb1</code></pre>
<p><code>Cryptsetup</code> will ask for a passphrase. Choose one that is both
secure and memorable. If you forget it, your data <em>will be lost.</em>
That will probably take a few seconds to complete, but when it's
done, it will have successfully converted your partition into an
encrypted LUKS volume.</p>
<p>Next, you have to open the volume onto the device mapper. This is the
stage at which you will be prompted for your passphrase. You can
choose the name that you want your partition mapped under. It doesn't
really matter what it is, so just pick something that will be easy to
remember and use.</p>
<pre><code># cryptsetup open /dev/sdb1 encrypted</code></pre>
<p>Once the drive is mapped, you'll have to choose a filesystem type for
your partition. Creating that filesystem is the same as it would be on
a regular partition.</p>
<pre><code># mkfs.ext4 /dev/mapper/encrypted</code></pre>
<p>The one difference between creating the filesystem on a regular
partition and an encrypted one is that you will use the path to the
mapped name instead of the actual partition location. Wait for the
filesystem to be created. Then, the drive will be ready for use.</p>
<h2 id="Mounting+and+Unmounting" name="Mounting+and+Unmounting">Mounting and Unmounting</h2>
<p>Manually mounting and unmounting encrypted partitions is almost the
same as doing so with normal partitions. There is one more step in
each direction, though. First, to manually mount an encrypted
partition, run the command below.</p>
<pre><code># cryptsetup --type luks open /dev/sdb1 encrypted
# mount -t ext4 /dev/mapper/encrypted /place/to/mount</code></pre>
<p>Unmounting the partition is the same as a normal one, but you have to
close the mapped device too.</p>
<pre><code># umount /place/to/mount
# cryptsetup close encrypted</code></pre>
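<p>The two manual steps above can also be wired into the boot process. Most
distributions read <code>/etc/crypttab</code> to open LUKS volumes at boot and then
mount them via <code>/etc/fstab</code>. A minimal sketch, assuming the same device and
mapped name as above (the <code>none</code> key field means you will be prompted for the
passphrase during boot):</p>

```text
# /etc/crypttab  --  mapped-name  device  keyfile  options
encrypted  /dev/sdb1  none  luks

# /etc/fstab
/dev/mapper/encrypted  /place/to/mount  ext4  defaults  0  2
```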
<h2 id="Closing" name="Closing">Closing</h2>
<p>There's plenty more, but when talking about security and encryption,
things run rather deep. This guide provides the basis for encrypting
and using encrypted partitions, which is an important first step that
shouldn't be discounted. There will definitely be more coming in this
area, so be sure to check back, if you're interested in going a bit
deeper.</p>
<hr />
<p>Source <a href="https://linuxconfig.org/basic-guide-to-encrypting-linux-partitions-with-luks">linuxconfig</a></p>
Open Source Alternatives to Visio
urn:uuid:1677a68e-2ad9-49de-2f37-0fd302a42399
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Need to create diagrams, flowcharts, circuits, or other kinds of
entity-relationship models? Microsoft Visio is without a doubt the
best software for that, but that doesn't mean it's the best choice
<em>for you.</em></p>
<p>Visio may be the industry standard in the corporate world, but it
comes with a huge drawback: it's expensive ($299 for the standard
version as of this writing). Can't afford that? Then you'll be happy
to know that several open source alternatives exist for the low, low
price of FREE.</p>
<p>We're going to highlight the two best ones here, but if you don't like
them for whatever reason, you can scroll down to the bottom of the
article for even more options to explore.</p>
<h3 id="Diagram+Creation+with+Dia" name="Diagram+Creation+with+Dia">Diagram Creation with Dia</h3>
<p><a href="http://dia-installer.de/">Dia</a> has been the go-to Visio alternative
for many years. What I like
most about it is the first impression that you get when it launches:
clean, simple, with an interface that's familiar and easy to navigate.
Quite reminiscent of Visio, in fact:</p>
<p><img src="/images/2018/dia-1.png" alt="dia-1" /></p>
<p>You'll be able to create your first diagram in mere minutes.
Drag-and-drop a few symbols onto the canvas, then connect them using
the various types available in the toolbox: lines, zigzags, arcs,
circles, curves, etc.</p>
<p>Dia also supports layers, making it a lot easier to manage complex
charts, and moving elements between layers is as simple as hitting a
hotkey.</p>
<p>Snap to grid, easy resizing, text labels, image insertions -- Dia has
it all. Anything you can do in Visio can be done in Dia as well. The
only real downside is that Dia can't open Visio VSD files, but it can
handle most other diagramming formats like XML, EPS, and SVG.</p>
<p><em>Download -- <a href="http://dia-installer.de/">Dia</a></em> (Free)</p>
<h3 id="Diagram+Creation+with+LibreOffice+Draw" name="Diagram+Creation+with+LibreOffice+Draw">Diagram Creation with LibreOffice Draw</h3>
<p>Have you heard of <a href="http://www.libreoffice.org/">LibreOffice</a>? As far
as open source competitors to Microsoft Office go, you won't find a
more solid and robust alternative.</p>
<p>LibreOffice is far from perfect, but it's a respectable option for
fans of open source software. The app that should interest you is
<em>LibreOffice Draw</em>, the Visio counterpart in this office suite.</p>
<p><img src="/images/2018/draw-1.png" alt="draw-1" /></p>
<p>LibreOffice Draw supplies two things for you: shapes and lines. You
use the shapes to represent diagram entities, and you use the lines to
connect them according to the entity relationships. It's perfect for
creating flowcharts, but you can do more with it if you want (like
desktop publishing or PDF editing).</p>
<p>First you have to open the Drawing toolbar, which you can do through
<em>View > Toolbars > Drawing</em>. Grid snapping is on by default, but you'll
want to adjust the snapping sensitivity: go to
<em>Tools > Options</em>, navigate to <em>LibreOffice Draw > Grid</em>, change the
values under <em>Resolution</em> to your intended grid size, and change the
values under <em>Subdivision</em> to <em>1</em>.</p>
<p>LibreOffice is surprisingly easy to use once it's set up properly. You
can draw shapes, connectors, lines, curves, symbols, arrows, thought
bubbles, and even 3D objects. If you're already using LibreOffice as
your main office suite, forget Dia and learn to use Draw instead. The
learning curve isn't much worse at all, and you can use it for more
than just diagrams.</p>
<p><em>Download -- <a href="http://www.libreoffice.org/download/libreoffice-fresh/">LibreOffice</a></em> (Free)</p>
<h3 id="Other+Alternatives+to+Visio" name="Other+Alternatives+to+Visio">Other Alternatives to Visio</h3>
<p>Dia and Draw may be the best available right now, but a quick web
search will turn up plenty of competitors that are just as good in
many ways. Keep in mind that these are NOT open source unless
specifically noted in the description.</p>
<ul>
<li><a href="http://www.yworks.com/products/yed">yEd Graph Editor</a> - Very similar
to Dia, except much more powerful and proportionally harder to use.
It has an automatic layout feature that can instantly rearrange a
diagram to be clutter-free and more readable, which is fantastic for
big and complex flowcharts.</li>
<li><a href="https://www.lucidchart.com/">LucidChart</a> - A very solid alternative
to Visio in a lot of ways. It's web-based, so you can access it from
anywhere, and packed full of features that make diagramming easy.</li>
<li><a href="https://www.draw.io/">draw.io</a> - A no-login-required web-based
diagramming tool that may not be the slickest in appearance, but
can certainly get the job done. Diagrams can be saved to Dropbox,
Google Drive, OneDrive, or locally. The interface is clean, the
results are acceptable, and it's open source.</li>
</ul>
<p>Reference: <a href="http://www.makeuseof.com/tag/a-free-open-source-alternative-to-microsoft-visio/">makeuseof</a></p>
10 tips for making documentation crystal clear
urn:uuid:6bd7769e-2f0f-a99e-4874-96b5031e910b
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So you've written some excellent documentation. Now what? Now it's
time to go back and edit it. When you first sit down to write your
documentation, you want to focus on what you're trying to say instead
of how you're saying it, but once that first draft is done it's time
to go back and polish it up a little.</p>
<p>One of my favorite ways to edit is to read what I've written aloud.
That's the best way to catch awkward phrasing or sentence structure
that might not stand out when you're reading it to yourself. If it
sounds good when you read it aloud, it probably is. If your
documentation happens to include instructions, you can watch someone
try to follow them. This provides good feedback on what steps are
missing or unclear, particularly if the person is unfamiliar with the
subject.</p>
<h3 id="Active+vs.+passive+voice" name="Active+vs.+passive+voice">Active vs. passive voice</h3>
<p>You should prefer the active voice in most cases. It's okay to be
direct. How do you check for passive voice? Insert the words
"by zombies". For example, it is much clearer to say
"If you click 'yes', you will delete your data" instead of "If you
click 'yes', data will be deleted." Apply the zombie test to these two
examples:</p>
<ul>
<li>"If you click 'yes', you will delete your data by zombies."</li>
<li>"If you click 'yes', data will be deleted by zombies."</li>
</ul>
<p>In the first example there is no doubt that you are the actor. The
second example shows that there is room for misinterpretation. Make it
very clear what actors perform actions.</p>
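If you want a rough automated companion to the zombie test, a grep for a form of "to be" followed by a word ending in "ed" flags many passive constructions. This is only a crude heuristic (it misses irregular participles and produces false positives), and the printf input below just stands in for a real documentation file:

```shell
# Crude passive-voice detector: a "to be" verb followed by a word ending
# in "ed". The printf lines stand in for a real docs file.
printf 'Data will be deleted.\nYou will delete your data.\n' \
  | grep -inE '\b(is|are|was|were|be|been|being)[[:space:]]+[[:alpha:]]+ed\b'
# → 1:Data will be deleted.
```

Only the passive sentence is flagged; the active rewrite passes clean.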
<h3 id="Eliminate+jargon" name="Eliminate+jargon">Eliminate jargon</h3>
<p>Some jargon terms are unavoidable, but for the sake of clarity you
should avoid them as much as possible. Linking to the definition of a
term the first time you use it is acceptable, and you should also
write your own brief definition. You don't want to rely on the
availability of external sites, or make your readers jump through too
many hoops to understand your documentation.</p>
<h3 id="Check+for+common+mistakes" name="Check+for+common+mistakes">Check for common mistakes</h3>
<p>Question everything you think you know, and take advantage of having
the world at your fingertips and look up everything. For example,
"e.g." means "for example" and "i.e." means "in other words".
"Effect" is a noun (except when it isn't, as the
<a href="https://xkcd.com/326/">comic xkcd demonstrates</a>), and "affect" is a
verb. Use "which" when a clause can be removed from the sentence
without a change in meaning, and "that" when it cannot.</p>
<h3 id="Remove+dangling+modifiers" name="Remove+dangling+modifiers">Remove dangling modifiers</h3>
<p>You have created a dangling modifier when it is unclear which object
is being modified by a word or phrase. A classic example is "Hungry,
the leftover food was devoured". Is Hungry the name of the food? Add
a comma after "food" if that is your meaning. If not, rewrite it to
make your meaning unambiguous: "Your author was hungry and devoured
the leftover food." Organize your sentences carefully; don't force your
readers to guess your meaning.</p>
<h3 id="Check+your+style+guide" name="Check+your+style+guide">Check your style guide</h3>
<p>If your project or company has a style guide for documentation, check
that what you've written conforms to it. One common mistake is the
inappropriate abbreviation of company and project names.</p>
<h3 id="Avoid+unclear+words" name="Avoid+unclear+words">Avoid unclear words</h3>
<p>Do you know how many times I deleted words like "often" and "some"
while writing this article? I don't know exactly how many, but I know
it's a non-zero number. Use words with specific meanings. This is
particularly important when you're trying to convince your reader that
what you're telling them is important. If I say "Following these tips
will make your writing better", that's less convincing than "Following
these tips will increase financial contributions to your project by
45%." If you catch yourself using vague words, ask yourself if you
really understand your topic, or if perhaps you're trying to hide
something.</p>
<h3 id="Check+the+word+order" name="Check+the+word+order">Check the word order</h3>
<p>The English language does not have a formally-defined ordering
structure for modifying nouns, but there is an informal structure
which is described in this <a href="https://twitter.com/MattAndersonNYT/status/772002757222002688">Tweet by Matthew Anderson</a>:
Opinion-size-age-shape-color-origin-material-purpose Noun. It's
something native English speakers know, but don't know we know. Peter
Sokolowski replied with advice to <a href="https://twitter.com/PeterSokolowski/status/773186018317131776">Put the "nounier" words closer to the noun</a>.
If that doesn't help, read the discussion in the replies, which has
numerous examples and explanations.</p>
<h3 id="Remove+words+like+%26quot%3Bjust%26quot%3B+and+%26quot%3Bsimply%26quot%3B" name="Remove+words+like+%26quot%3Bjust%26quot%3B+and+%26quot%3Bsimply%26quot%3B">Remove words like "just" and "simply"</h3>
<p>Technology is not as simple as we like to pretend it is. If you tell
your readers that something is simple and then they can't do it, what
are they to think of themselves? Unless you're writing an infomercial
for the latest must-have kitchen gadget, leave out those words.</p>
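This habit is easy to automate as a documentation check. The sketch below greps for common minimizing words; the word list and the printf stand-in input are illustrative, so extend them to match your own tics:

```shell
# Flag minimizing words that set struggling readers up to feel stupid.
# The printf lines stand in for a real docs file.
printf 'Just run the installer.\nSimply click OK.\nRun make.\n' \
  | grep -inE '\b(just|simply|easy|easily|obviously|clearly)\b'
# → 1:Just run the installer.
# → 2:Simply click OK.
```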
<h3 id="Check+your+pronouns" name="Check+your+pronouns">Check your pronouns</h3>
<p>When you say "we", who do you really mean? I have seen documentation
written in what I call "cooking show style", where "Next we click the
whatchamadoozit to fribble the wozulator." When you're writing in a
support context it is especially important to be clear who does what.
If you tell someone "We can change that setting", they expect that
you will do it for them, and not that they can do it with your
guidance. As a general rule, I avoid using first person (I/we) except
when I am talking about myself as the author or the organization I am
representing. When in doubt, refer to yourself in the third person
(e.g. "The author suggests you refer to yourself in the third person").
It might sound overly formal, but it is clear.</p>
<h3 id="Remove+split+infinitives" name="Remove+split+infinitives">Remove split infinitives</h3>
<p>Don't put words in between "to" and a verb. Your documentation is on
a mission to go boldly where no docs have gone before. (Starship
captains are excused from this rule.)</p>
<p>Reference: <a href="https://opensource.com/life/16/11/tips-for-clear-documentation">opensource life</a></p>
Project Requirements
urn:uuid:02ead301-0527-c89d-c77b-213dd279175d
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is sometimes so true!</p>
<p><img src="/images/2018/requirements_accavdar.jpg" alt="Project Requirements Image" /></p>
Ascii Art Tools
urn:uuid:af44fe69-f151-bdf0-63c6-0fbfd814661c
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Here are some resources dealing with ASCII art...</p>
<ul>
<li><a href="https://github.com/dhobsd/asciitosvg">AsciiToSVG</a> - PHP code
to convert ascii art into SVG.</li>
<li><a href="http://asciiflow.com/">AsciiFlow</a> - Web app implementing an
ascii art editor.</li>
<li><a href="http://search.cpan.org/dist/App-Asciio/lib/App/Asciio.pm">Asciio</a>
- A Perl application that allows you to draw ASCII diagrams in a modern
(but simple) graphical interface.</li>
<li><a href="http://ditaa.sourceforge.net/">ditaa</a> - Java based ascii art to PNG
converter.</li>
<li><a href="https://github.com/christiangoltz/shaape">shaape</a> - Shaape is an
ascii art to image converter designed to be used with asciidoc.</li>
<li><a href="http://blockdiag.com/en/index.html">blockdiag</a> - blockdiag and its
family generate diagram images from simple text files. (Python)</li>
</ul>
<p>A few more added ones:</p>
<ul>
<li><a href="https://github.com/blampe/goat">GoAT</a></li>
<li><a href="https://github.com/yuzutech/kroki">kroki</a></li>
<li><a href="https://mermaid-js.github.io/mermaid/#/">mermaid-js</a></li>
<li><a href="https://github.com/aafigure/aafigure">aafigure</a></li>
<li><a href="https://github.com/ivanceras/svgbob">svgbob</a></li>
</ul>
6 Cloud-Based Tools To Help You Build A Web App With Ease
urn:uuid:f78895c6-df12-5688-010f-88c02204915b
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>In a relatively short amount of time, building mobile apps has
transformed from a process that required deep development knowledge
into something that almost anyone can do. Cloud-based tools are
quickly becoming the norm for app developers, and these are some of
the most highly recommended tools, each one being ideal for certain
developers.</p>
<ol>
<li><a href="http://diy.como.com/features/">Conduit</a></li>
<li><a href="http://softarex.com/">Softarex</a></li>
<li><a href="http://www.appery.io/">Appery.io</a></li>
<li><a href="https://codiqa.com/">Codiqa</a></li>
<li><a href="https://www.knackhq.com/">Knack</a></li>
<li><a href="http://www.kinvey.com/">Kinvey</a></li>
</ol>
<p>Source: <a href="http://www.lifehack.org/468599/6-cloud-based-tools-to-help-you-build-a-web-app-with-ease">6 Cloud-Based Tools To Help You Build A Web App With Ease</a></p>
So the server crashed
urn:uuid:bc34dc77-3736-3af9-7ec5-9680adae51be
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Now we have to re-create things...</p>
custom desktop ideas
urn:uuid:d7ed87f5-639c-d636-4e8d-4e5c0039d2b6
2024-03-05T00:00:00+01:00
Alejandro Liu
<ul>
<li>fast boot</li>
</ul>
<h3 id="Window+Manager" name="Window+Manager">Window Manager</h3>
<ul>
<li>Snap windows to border/windows</li>
</ul>
<h3 id="File+Manager" name="File+Manager">File Manager</h3>
<ul>
<li><a href="http://ulf.epplejasper.de/ulfm/ulfm.html">Next style file manager</a></li>
<li><a href="https://wiki.tcl.tk/7697">TkDesk</a> : Note, it uses [incr tcl]</li>
<li><a href="https://wiki.tcl.tk/7772">TkMC</a> : Basic file browser functionality</li>
<li><a href="https://wiki.tcl.tk/16045">TOXFile</a>: Another file manager</li>
<li><a href="http://jsish.org/browsex/download/">TkWm</a></li>
</ul>
<h3 id="Notification+and+Tray+areas" name="Notification+and+Tray+areas">Notification and Tray areas</h3>
<h3 id="Launcher" name="Launcher">Launcher</h3>
<h3 id="Applications" name="Applications">Applications</h3>
<ul>
<li>browser</li>
<li>media player</li>
<li>photo/video manager (or webapp)</li>
<li>office apps (open365.io)</li>
<li>ah photo album app</li>
<li><a href="https://launchpad.net/ubuntu/+source/tkpaint">tkpaint</a></li>
<li><a href="http://web.tiscali.it/pas80/retrolook.htm">retrolook</a> is a Window Manager</li>
</ul>
<h3 id="TCL%2FTK" name="TCL%2FTK">TCL/TK</h3>
<ul>
<li><a href="http://www.wjduquette.com/tcl/objects.html">objects</a></li>
<li><a href="http://blog.cleverly.com/permalinks/264.html">tk</a></li>
<li><a href="https://wiki.tcl.tk/_repo/Whim-2399.tar.bz2">whim tarball</a></li>
</ul>
<hr />
<ul>
<li><a href="https://github.com/wmutils/libwm">libwm</a></li>
</ul>
<h2 id="Alt-desktop" name="Alt-desktop">Alt-desktop</h2>
<ul>
<li>Use assist to do a basic install</li>
<li>Set-up xorg</li>
</ul>
<pre><code>pacman -S xorg-server xorg-server-utils xorg-xinit xorg-utils mesa
xterm xorg-twm xorg-xclock</code></pre>
<p>Drivers</p>
<ul>
<li>nvidia</li>
<li>xf86-video-ati</li>
<li>xf86-video-intel</li>
</ul>
<p>Common</p>
<ul>
<li>xf86-input-synaptics : for laptops</li>
<li>ttf-dejavu ttf-droid ttf-freefont ttf-liberation ttf-opensans</li>
<li><em>aur</em> ttf-ms-fonts </li>
</ul>
<p>budgie: budgie-desktop</p>
Retropie
urn:uuid:367b595a-56c7-abe6-2315-bb1e4f0e0098
2024-03-05T00:00:00+01:00
Alejandro Liu
<ul>
<li>
<p>DVD player</p>
</li>
<li>
<p>Bluetooth receiver </p>
</li>
<li>
<p>keyboard </p>
</li>
<li>
<p>good keyboard bindings</p>
</li>
<li>
<p>how to exit games</p>
</li>
<li>
<p>convert probox into binding keys</p>
</li>
<li>
<p>where are key codes saved</p>
</li>
</ul>
<hr />
<ul>
<li>Write image:
<ul>
<li>gunzip < retropie-4.3-rpi2_rpi3.img.gz | sudo dd of=/dev/sde bs=4M</li>
</ul></li>
<li>Configure config.txt
<ul>
<li>hdmi_force_hotplug=1</li>
<li>hdmi_drive=2</li>
</ul></li>
<li>Boot and configure keyboard
<ul>
<li>D-Pad => D-Pad</li>
<li>start : mike (F5)</li>
<li>select : menu key</li>
<li>A : OK (enter)</li>
<li>B : back</li>
<li>X : vol-</li>
<li>Y : vol+
... skip all ...</li>
<li>HotKey : power</li>
</ul></li>
<li>Configure SSH
<ul>
<li>sudo raspi-config
<ul>
<li>Interfacing Options</li>
<li>Enable SSH</li>
</ul></li>
</ul></li>
<li>Install Kodi
<ul>
<li>?</li>
</ul></li>
</ul>
<hr />
<ul>
<li>NFS
<ul>
<li>nfs-common should be installed.</li>
<li>sudo vi /etc/default/nfs-common</li>
<li>activate statd</li>
<li>Add systemctl start rpcbind|nfs-common to /etc/rc.local</li>
<li>Doing systemctl enable resulted in a messed up system...</li>
<li>sudo apt-get install autofs</li>
<li>Enable /net hosts from /etc/auto.master ... pointing to /etc/auto.net
<ul>
<li>alvm1-xvdb1d -> /net/alvm1/media/xvdb1/d</li>
<li>alvm1-xvdb1m -> /net/alvm1/media/xvdb1/m</li>
<li>alvm1-xvdb1p -> /net/alvm1/media/xvdb1/p</li>
<li>alvm1-xvdc1p -> /net/alvm1/media/xvdc1/p</li>
<li>alvm1-xvdd1p -> /net/alvm1/media/xvdd1/p</li>
<li>ow1-v1 -> /net/ow1/data/v1</li>
<li>vs1-xvdb1 -> /net/vs1/media/xvdb1</li>
</ul></li>
</ul></li>
<li><a href="https://afterthoughtsoftware.com/products/rasclock">RTC</a>
<ul>
<li>sudo raspi-config
<ul>
<li>Interfacing options</li>
<li>I2C</li>
</ul></li>
<li>Edit /boot/config.txt
<ul>
<li>dtoverlay=i2c-rtc,pcf2127</li>
</ul></li>
</ul></li>
<li><a href="https://mausberry-circuits.myshopify.com/pages/setup">Power control:mausberry</a>
<ul>
<li>git clone <a href="https://github.com/t-richards/mausberry-switch.git">https://github.com/t-richards/mausberry-switch.git</a></li>
<li>autoreconf -i -f</li>
<li>./configure</li>
<li>make</li>
<li>sudo make install</li>
<li>Add also mausberry-switch & to /etc/rc.local</li>
</ul></li>
<li><a href="https://github.com/dillbyrne/es-cec-input">CEC control</a>
<ul>
<li>sudo apt-get install cec-utils</li>
<li>sudo apt-get install python-pip</li>
<li>sudo pip install python-uinput</li>
<li>git clone <a href="https://github.com/dillbyrne/es-cec-input.git">https://github.com/dillbyrne/es-cec-input.git</a></li>
</ul></li>
<li>input
<ul>
<li><a href="https://github.com/MerlijnWajer/uinput-mapper">uinput-mapper</a></li>
<li><a href="http://blog.pi3g.com/2014/03/uinput-mapper-redirecting-keyboard-and-mouse-to-any-linux-system-using-a-raspberry-pi/">redirection HID using uinput-mapper</a></li>
<li><a href="https://www.raspberrypi.org/forums/viewtopic.php?t=85299">forums</a></li>
<li><a href="http://tjjr.fi/sw/python-uinput/">python-uinput</a></li>
<li><a href="http://python-evdev.readthedocs.io/en/latest/tutorial.html#">python evdev</a></li>
<li><a href="https://github.com/pyusb/pyusb">pytusb</a></li>
</ul></li>
</ul>
<hr />
<ul>
<li><a href="https://mausberry-circuits.myshopify.com/pages/setup">mausberry setup</a></li>
<li><a href="https://github.com/RetroPie/RetroPie-Setup/wiki/Running-ROMs-from-a-Network-Share">running ROMS from a share</a></li>
</ul>
<hr />
<ul>
<li>cron backup kodi</li>
<li>document cec input</li>
<li>re-do keyboard bindings</li>
</ul>
using cachefiles on a Linux NFS share
urn:uuid:57bd55f9-99bd-5693-b744-4095f81033c0
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>If you often mount and access a remote NFS share on your system, you
will probably want to know how to improve NFS file access performance.
One possibility is using file caching. In Linux, there is a caching
filesystem called FS-Cache which enables file caching for network file
systems such as NFS. FS-Cache is built into the Linux kernel 2.6.30
and higher.</p>
<p>In order for FS-Cache to operate, it needs a cache back-end which
provides actual storage for caching. One such cache back-end is
cachefiles. Once you set up cachefiles, it will automatically
enable file caching for NFS shares.</p>
<p>In this tutorial, I will describe <strong>how to enable local file caching for
NFS shares</strong> by using cachefiles.</p>
<h2 id="Requirements+for+Setting+Up+CacheFiles" name="Requirements+for+Setting+Up+CacheFiles">Requirements for Setting Up CacheFiles</h2>
<p>One requirement for setting up cachefiles is that the local filesystem
supports user-defined extended file attributes (i.e., xattr), because
cachefiles uses xattr to store extra information for cache maintenance.</p>
<p>If your local filesystem is ext4-type, you don't need to worry about
this since xattr is enabled in ext4 by default.</p>
<p>However, if you are using an ext3 filesystem, then you need to mount the
local filesystem with the "user_xattr" option. To do so, edit /etc/fstab
to add the "user_xattr" mount option to the disk partition that will be
used by cachefiles for file caching. For example, assuming that
<code>/dev/hda1</code> is such a partition:</p>
<pre><code>/dev/hda1 / ext3 rw,user_xattr 0 0</code></pre>
<p>After modifying /etc/fstab, reload it by running:</p>
<pre><code>sudo mount -o remount /</code></pre>
<h2 id="Set+Up+CacheFiles" name="Set+Up+CacheFiles">Set Up CacheFiles</h2>
<p>In order to set up cache back-end using cachefiles, you need to
install <code>cachefilesd</code>, a userspace daemon for managing cachefiles.</p>
<p>To install <code>cachefilesd</code> on Ubuntu or Debian:</p>
<pre><code>sudo apt-get install cachefilesd</code></pre>
<p>To install cachefilesd on CentOS, Fedora or RedHat:</p>
<pre><code>sudo yum install cachefilesd
sudo chkconfig cachefilesd on</code></pre>
<p>After installation, enable cachefilesd by editing its configuration
file as follows.</p>
<pre><code>sudo vi /etc/default/cachefilesd
RUN=yes</code></pre>
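<p>The cache location and culling thresholds live in <code>/etc/cachefilesd.conf</code>.
The stock file usually looks something like the fragment below (the path and
percentages are common distribution defaults; the thresholds are percentages of
free space on the filesystem holding the cache):</p>

```text
# /etc/cachefilesd.conf
dir /var/cache/fscache   # cache storage; filesystem must support user xattrs
tag mycache
brun  10%                # culling turns off when free space rises above 10%
bcull  7%                # culling starts when free space falls below 7%
bstop  3%                # caching is suspended when free space falls below 3%
```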
<p>Next, mount a remote NFS share with fsc option:</p>
<pre><code>sudo vi /etc/fstab
192.168.1.13:/home/xmodulo /mnt nfs rw,hard,intr,fsc</code></pre>
<p>Alternatively, if you mount the remote NFS share from the command line, specify fsc as a command-line option:</p>
<pre><code>sudo mount -t nfs 192.168.1.13:/home/xmodulo /mnt -o fsc</code></pre>
<p>Finally, restart cachefilesd:</p>
<pre><code>sudo service cachefilesd restart</code></pre>
<p>At this point, file caching should be enabled for the mounted NFS
share, which means that previously accessed files in the mounted
NFS share will be retrieved from local file cache.</p>
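<p>To confirm the cache is actually in use, check that the share really was
mounted with the fsc option and, on kernels built with FS-Cache statistics
support, watch the cumulative counters (the exact counter names vary by kernel
version):</p>

```text
# mount | grep fsc
# cat /proc/fs/fscache/stats
```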
<p>If you want to flush NFS file cache for any reason, simply restart
cachefilesd.</p>
<pre><code>sudo service cachefilesd restart</code></pre>
<p>Reference: <a href="http://xmodulo.com/how-to-enable-local-file-caching-for-nfs-share-on-linux.html">xmodulo.com</a></p>
VNC desktop
urn:uuid:bf7a63f9-ebaa-c670-fe7a-3efd64603a8f
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>IDEA:</p>
<pre><code>Client connects >
< server sends version string (Use 3.3 only)
Client replies with actual version string >
< server sends security type; NONE
Client send ClientInit (shared flag) >
< server sends ServerInit (server details) WxHxD Name
=== standard stuff ===</code></pre>
<h3 id="2+VERSIONS" name="2+VERSIONS">2 VERSIONS</h3>
<ul>
<li>kiosk
<ul>
<li>unmodified vncviewer connects to a multiplexer screen</li>
<li>server (in inetd mode) first spawns a Xvnc (in inetd mode) which does a login authentication
and finds an existing desktop or spawns a new one,
saves the port and exits</li>
<li>server then connects to the new desktop port and does the VNC handshake. Sends client
Desktop change and name change messages</li>
<li>Forwards everything...</li>
</ul></li>
<li>command
<ul>
<li>User points command to a server.</li>
<li>Script selects a new port.</li>
<li>Ssh to server, look for vnc session, or spawn new one.</li>
<li>netcat to vnc session.</li>
<li>Listen on the new port, and netcat to ssh.</li>
<li>vncviewer to netcat port.</li>
</ul></li>
</ul>
<p>We use only v3.3 because we don't want to mess with security types. Security should be handled by SSH tunnel.</p>
<p>Check if user can login</p>
<ul>
<li><a href="https://wiki.dovecot.org/AuthDatabase/CheckPassword">checkpassword</a></li>
<li><a href="https://github.com/jonabbey/panda-imap/blob/master/src/osdep/unix/ckp_pam.c">unix pam</a></li>
<li><a href="https://github.com/svarshavchik/courier/blob/master/courier-authlib/authpam.c">authpam</a></li>
<li><a href="https://github.com/mozilla-b2g/busybox/blob/master/loginutils/login.c">busybox login</a></li>
<li><a href="https://github.com/mingodad/citadel/blob/master/citadel/auth.c">citadel auth</a></li>
<li><a href="http://www.linuxdevcenter.com/pub/a/linux/2002/04/04/PamModules.html">PamModules</a></li>
</ul>
SimpleNote/Markdown editor
urn:uuid:f98f00d1-4d16-594e-beeb-9aa8ae0bf792
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><strong>We just load SimpleNote as a Desktop WebApp</strong></p>
<ol>
<li>Create first a basic markdown editor
<ul>
<li>styling</li>
</ul></li>
</ol>
<ul>
<li>Hack retext
<ul>
<li>add stuff for multiple views</li>
</ul></li>
<li>nvpy remixed with retext?</li>
<li><a href="http://www.py2exe.org/">Wrapper python for Windows</a></li>
</ul>
<p>Another option:</p>
<ul>
<li>SimpleNote native app + editor</li>
</ul>
<p>Features:</p>
<ul>
<li>Search tags from a menu</li>
<li>Search within document</li>
<li>Poll website for changes</li>
</ul>
<p>Or just use: <a href="https://github.com/lepture/mistune">mistune</a></p>
<p>Basic editors</p>
<ul>
<li><a href="http://www.instructables.com/id/Create-a-Simple-Python-Text-Editor/">simple python text editor</a></li>
<li><a href="http://knowpapa.com/text-editor/">text-editor</a></li>
<li><a href="https://www.binpress.com/tutorial/building-a-text-editor-with-pyqt-part-one/143">text-editor with pyqt</a></li>
<li><a href="http://thelivingpearl.com/2013/07/03/simple-gui-text-editor-in-python/">gui text editor</a></li>
<li><a href="http://code.activestate.com/recipes/578568-plain-text-editor-in-python/">python plain text editor</a></li>
</ul>
<p>Syntax highlighting:</p>
<ul>
<li><a href="http://pygments.org/">pygments</a></li>
<li><a href="http://stackoverflow.com/questions/32058760/improve-pygments-syntax-highlighting-speed-for-tkinter-text">pygments with tkinter</a></li>
<li><a href="http://stackoverflow.com/questions/29688831/pygments-syntax-highlighter-in-python-tkinter-text-widget/30198307#30198307">pygments with tkinter</a></li>
<li><a href="http://stackoverflow.com/questions/29688831/pygments-syntax-highlighter-in-python-tkinter-text-widget">pygments with tkinter</a></li>
<li><a href="http://pygments.org/docs/lexers/#">pygments lexer</a></li>
</ul>
Interesting wordpress plugins
urn:uuid:41e45b60-d0be-72e7-643c-a489f86b6686
2024-03-05T00:00:00+01:00
Alejandro Liu
<ul>
<li><a href="https://wordpress.org/plugins/restrict-categories/">restrict categories</a></li>
<li><a href="https://wordpress.org/plugins/access-control-by-category/">access control by category</a></li>
<li><a href="https://wordpress.org/plugins/s2member/">site member</a></li>
<li><a href="https://wordpress.org/plugins/user-access-manager/">user access mgr</a></li>
<li><a href="https://wordpress.org/plugins/paid-memberships-pro/">paid members</a></li>
</ul>
Rollback with YUM History Command
urn:uuid:867e4667-edee-1821-605a-b877cd510dce
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>From <a href="https://www.2daygeek.com/rollback-fallback-updates-downgrade-packages-centos-rhel-fedora/">2daygeek.com</a></p>
<p>Server patching is one of the important tasks of a Linux system administrator, keeping systems stable and performing well. Vendors release security/vulnerability patches frequently, and affected packages must be updated in order to limit any potential security risks.</p>
<p>Yum (Yellowdog Updater, Modified) is the RPM package management utility for CentOS and Red Hat systems. The yum history command allows an administrator to roll the system back to a previous state, but due to some limitations, rollbacks do not work in all situations: the yum command may simply do nothing, or it may remove packages you do not expect.</p>
<p>Taking a full system backup prior to performing any update/upgrade is always recommended; yum history is NOT meant to replace system backups. A backup will help you restore the system to a previous state at any point in time.</p>
<p>In some cases, the hosted applications might not work properly or throw errors due to recent patch updates (it could be a library incompatibility or a package upgrade); what is the solution in this case?</p>
<p>Get in touch with the app dev team, figure out which libraries and packages are causing the issue, then do the rollback with the help of the yum history command.</p>
<p><strong>Note:</strong></p>
<p>Rollback of selinux, selinux-policy-*, kernel, and glibc packages (and dependencies of glibc such as gcc) to an older version is not supported.
Downgrading a system to a previous minor version (e.g. CentOS 6.9 to CentOS 6.8) is not recommended, as it can leave the system in an undesired state.</p>
<p>Let's first verify the available updates on the system and pick one of the packages for this experiment.</p>
<pre><code># yum update
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile
epel/metalink | 12 kB 00:00
* epel: mirror.csclub.uwaterloo.ca
base | 3.7 kB 00:00
dockerrepo | 2.9 kB 00:00
draios | 2.9 kB 00:00
draios/primary_db | 13 kB 00:00
epel | 4.3 kB 00:00
epel/primary_db | 5.9 MB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 2.5 MB 00:00
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be updated
---> Package git.x86_64 0:1.7.1-9.el6_9 will be an update
---> Package httpd.x86_64 0:2.2.15-60.el6.centos.4 will be updated
---> Package httpd.x86_64 0:2.2.15-60.el6.centos.5 will be an update
---> Package httpd-tools.x86_64 0:2.2.15-60.el6.centos.4 will be updated
---> Package httpd-tools.x86_64 0:2.2.15-60.el6.centos.5 will be an update
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be updated
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Updating:
git x86_64 1.7.1-9.el6_9 updates 4.6 M
httpd x86_64 2.2.15-60.el6.centos.5 updates 836 k
httpd-tools x86_64 2.2.15-60.el6.centos.5 updates 80 k
perl-Git noarch 1.7.1-9.el6_9 updates 29 k
Transaction Summary
=================================================================================================
Upgrade 4 Package(s)
Total download size: 5.5 M
Is this ok [y/N]: n</code></pre>
<p>As you can see in the above output, a <code>git</code> package update is available, so we are going to use that. Run the following command to see version information for the package (the currently installed version and the available update version).</p>
<pre><code># yum list git
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile
* epel: mirror.csclub.uwaterloo.ca
Installed Packages
git.x86_64 1.7.1-8.el6 @base
Available Packages
git.x86_64 1.7.1-9.el6_9 updates</code></pre>
<p>Run the following command to update <code>git</code> package from <code>1.7.1-8</code> to <code>1.7.1-9</code>.</p>
<pre><code># yum update git
Loaded plugins: fastestmirror, presto
Setting up Update Process
Loading mirror speeds from cached hostfile
* base: repos.lax.quadranet.com
* epel: fedora.mirrors.pair.com
* extras: mirrors.seas.harvard.edu
* updates: mirror.sesp.northwestern.edu
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be updated
--> Processing Dependency: git = 1.7.1-8.el6 for package: perl-Git-1.7.1-8.el6.noarch
---> Package git.x86_64 0:1.7.1-9.el6_9 will be an update
--> Running transaction check
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be updated
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Updating:
git x86_64 1.7.1-9.el6_9 updates 4.6 M
Updating for dependencies:
perl-Git noarch 1.7.1-9.el6_9 updates 29 k
Transaction Summary
=================================================================================================
Upgrade 2 Package(s)
Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 4.6 M
(1/2): git-1.7.1-9.el6_9.x86_64.rpm | 4.6 MB 00:00
(2/2): perl-Git-1.7.1-9.el6_9.noarch.rpm | 29 kB 00:00
-------------------------------------------------------------------------------------------------
Total 5.8 MB/s | 4.6 MB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : perl-Git-1.7.1-9.el6_9.noarch 1/4
Updating : git-1.7.1-9.el6_9.x86_64 2/4
Cleanup : perl-Git-1.7.1-8.el6.noarch 3/4
Cleanup : git-1.7.1-8.el6.x86_64 4/4
Verifying : git-1.7.1-9.el6_9.x86_64 1/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 2/4
Verifying : git-1.7.1-8.el6.x86_64 3/4
Verifying : perl-Git-1.7.1-8.el6.noarch 4/4
Updated:
git.x86_64 0:1.7.1-9.el6_9
Dependency Updated:
perl-Git.noarch 0:1.7.1-9.el6_9
Complete!</code></pre>
<p>Verify updated version of <code>git</code> package.</p>
<pre><code># yum list git
Installed Packages
git.x86_64 1.7.1-9.el6_9 @updates
or
# rpm -q git
git-1.7.1-9.el6_9.x86_64</code></pre>
<p>As of now, we have successfully completed the package update and have a package to roll back. Just follow the steps below for the rollback procedure.</p>
<p>First, get the yum transaction ID using the following command. The output clearly shows all the required information: the transaction ID, the login user who performed the transaction, the date and time, the action(s) (install or update), and how many packages were altered in the transaction.</p>
<pre><code># yum history
or
# yum history list all
Loaded plugins: fastestmirror, presto
ID | Login user | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
13 | root | 2017-08-18 13:30 | Update | 2
12 | root | 2017-08-10 07:46 | Install | 1
11 | root | 2017-07-28 17:10 | E, I, U | 28 EE
10 | root | 2017-04-21 09:16 | E, I, U | 162 EE
9 | root | 2017-02-09 17:09 | E, I, U | 20 EE
8 | root | 2017-02-02 10:45 | Install | 1
7 | root | 2016-12-15 06:48 | Update | 1
6 | root | 2016-12-15 06:43 | Install | 1
5 | root | 2016-12-02 10:28 | E, I, U | 23 EE
4 | root | 2016-10-28 05:37 | E, I, U | 13 EE
3 | root | 2016-10-18 12:53 | Install | 1
2 | root | 2016-09-30 10:28 | E, I, U | 31 EE
1 | root | 2016-07-26 11:40 | E, I, U | 160 EE</code></pre>
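The ID in the first column is what a later `yum history undo` needs. As a convenience it can be pulled out of that output with a small helper; this is just a sketch, and `last_txn_id` is a hypothetical name (it assumes the table format shown above):

```shell
# Sketch: extract the most recent transaction ID from `yum history` output.
# It keys on the first pipe-separated column being a bare number, which
# skips the header and separator lines.
last_txn_id() {
  awk -F'|' '$1 ~ /^ *[0-9]+ *$/ { gsub(/ /, "", $1); print $1; exit }'
}

# Usage on a real system:
#   id=$(yum history list all | last_txn_id)
#   yum history undo "$id"
```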
<p>The above output shows that two packages were altered, because updating <code>git</code> also updated its dependency <code>perl-Git</code>. Run the following command to view detailed information about the transaction.</p>
<pre><code># yum history info 13
Loaded plugins: fastestmirror, presto
Transaction ID : 13
Begin time : Fri Aug 18 13:30:52 2017
Begin rpmdb : 420:f5c5f9184f44cf317de64d3a35199e894ad71188
End time : 13:30:54 2017 (2 seconds)
End rpmdb : 420:d04a95c25d4526ef87598f0dcaec66d3f99b98d4
User : root
Return-Code : Success
Command Line : update git
Transaction performed with:
Installed rpm-4.8.0-55.el6.x86_64 @base
Installed yum-3.2.29-81.el6.centos.noarch @base
Installed yum-plugin-fastestmirror-1.1.30-40.el6.noarch @base
Installed yum-presto-0.6.2-1.el6.noarch @anaconda-CentOS-201207061011.x86_64/6.3
Packages Altered:
Updated git-1.7.1-8.el6.x86_64 @base
Update 1.7.1-9.el6_9.x86_64 @updates
Updated perl-Git-1.7.1-8.el6.noarch @base
Update 1.7.1-9.el6_9.noarch @updates
history info</code></pre>
<p>Run the following command to roll back the <code>git</code> package to the previous version.</p>
<pre><code># yum history undo 13
Loaded plugins: fastestmirror, presto
Undoing transaction 13, from Fri Aug 18 13:30:52 2017
Updated git-1.7.1-8.el6.x86_64 @base
Update 1.7.1-9.el6_9.x86_64 @updates
Updated perl-Git-1.7.1-8.el6.noarch @base
Update 1.7.1-9.el6_9.noarch @updates
Loading mirror speeds from cached hostfile
* base: repos.lax.quadranet.com
* epel: fedora.mirrors.pair.com
* extras: repo1.dal.innoscale.net
* updates: mirror.vtti.vt.edu
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be a downgrade
---> Package git.x86_64 0:1.7.1-9.el6_9 will be erased
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be a downgrade
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Downgrading:
git x86_64 1.7.1-8.el6 base 4.6 M
perl-Git noarch 1.7.1-8.el6 base 29 k
Transaction Summary
=================================================================================================
Downgrade 2 Package(s)
Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 4.6 M
(1/2): git-1.7.1-8.el6.x86_64.rpm | 4.6 MB 00:00
(2/2): perl-Git-1.7.1-8.el6.noarch.rpm | 29 kB 00:00
-------------------------------------------------------------------------------------------------
Total 3.4 MB/s | 4.6 MB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : perl-Git-1.7.1-8.el6.noarch 1/4
Installing : git-1.7.1-8.el6.x86_64 2/4
Cleanup : perl-Git-1.7.1-9.el6_9.noarch 3/4
Cleanup : git-1.7.1-9.el6_9.x86_64 4/4
Verifying : git-1.7.1-8.el6.x86_64 1/4
Verifying : perl-Git-1.7.1-8.el6.noarch 2/4
Verifying : git-1.7.1-9.el6_9.x86_64 3/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 4/4
Removed:
git.x86_64 0:1.7.1-9.el6_9 perl-Git.noarch 0:1.7.1-9.el6_9
Installed:
git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6
Complete!</code></pre>
<p>After rollback, use the following command to re-check the downgraded package version.</p>
<pre><code># yum list git
or
# rpm -q git
git-1.7.1-8.el6.x86_64</code></pre>
<h3 id="Rollback+Updates+using+YUM+downgrade+command" name="Rollback+Updates+using+YUM+downgrade+command">Rollback Updates using YUM downgrade command</h3>
<p>Alternatively, we can roll back updates using the YUM downgrade command.</p>
<pre><code># yum downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6
Loaded plugins: search-disabled-repos, security, ulninfo
Setting up Downgrade Process
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be a downgrade
---> Package git.x86_64 0:1.7.1-9.el6_9 will be erased
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be a downgrade
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Downgrading:
git x86_64 1.7.1-8.el6 base 4.6 M
perl-Git noarch 1.7.1-8.el6 base 29 k
Transaction Summary
=================================================================================================
Downgrade 2 Package(s)
Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): git-1.7.1-8.el6.x86_64.rpm | 4.6 MB 00:00
(2/2): perl-Git-1.7.1-8.el6.noarch.rpm | 28 kB 00:00
-------------------------------------------------------------------------------------------------
Total 3.7 MB/s | 4.6 MB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : perl-Git-1.7.1-8.el6.noarch 1/4
Installing : git-1.7.1-8.el6.x86_64 2/4
Cleanup : perl-Git-1.7.1-9.el6_9.noarch 3/4
Cleanup : git-1.7.1-9.el6_9.x86_64 4/4
Verifying : git-1.7.1-8.el6.x86_64 1/4
Verifying : perl-Git-1.7.1-8.el6.noarch 2/4
Verifying : git-1.7.1-9.el6_9.x86_64 3/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 4/4
Removed:
git.x86_64 0:1.7.1-9.el6_9 perl-Git.noarch 0:1.7.1-9.el6_9
Installed:
git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6
Complete!</code></pre>
<p><strong>Note</strong> : You have to downgrade the dependency packages too; otherwise the downgrade command cannot satisfy the dependencies and will remove the current version of the dependency packages instead of downgrading them.</p>
<h3 id="For+Fedora+Users" name="For+Fedora+Users">For Fedora Users</h3>
<p>Use the same commands as above, substituting the DNF package manager for YUM.</p>
<pre><code># dnf list git
# dnf history
# dnf history info
# dnf history undo
# dnf list git
# dnf downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6</code></pre>
SeedBoxes
urn:uuid:15866a6d-a5ca-a0c1-aaac-57740d97f459
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>A seedbox is a dedicated server at a high speed datacenter with a
public IP address for the downloading and seeding of bittorrent files.
Persons who have access to a seedbox can download these files to
their personal computers at any time and from any place that has an
internet connection.</p>
<p>References:</p>
<ul>
<li><a href="https://www.rapidseedbox.com/#pricing">rapidseedbox</a></li>
<li><a href="https://cheapseedboxes.com/top-10-seedbox-best-providers-cheap/">cheapseedboxes</a></li>
</ul>
Centos Install notes
urn:uuid:5f7d2603-4ae7-1101-3f12-015f4b3a7be3
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Set-up <code>local.repo</code></p>
<p>yum installs:</p>
<ul>
<li>nfs-utils autofs</li>
<li>@x11</li>
<li>@xfce</li>
<li>wget</li>
<li>dejavu-sans-fonts dejavu-sans-mono-fonts dejavu-serif-fonts</li>
<li>xorg-x11-fonts-{Type1,misc,75dpi,100dpi}</li>
<li>bitmap-console-fonts bitmap-fixed-fonts bitmap-fonts-compat bitmap-lucida-typewriter-fonts</li>
<li>ucs-miscfixed-fonts urw-fonts</li>
<li>open-sans-fonts</li>
<li>webcore-fonts webcore-fonts-vista</li>
<li>liberation-mono-fonts liberation-sans-fonts liberation-serif-fonts</li>
<li>bitstream-vera-sans-fonts bitstream-vera-serif-fonts</li>
<li>gnu-free-{mono,sans,serif}-fonts</li>
<li>tk</li>
<li>firefox</li>
<li>mplayer ffmpeg alsa-utils </li>
<li>xsensors xfce4-sensors-plugin</li>
<li>keepassx</li>
<li>git</li>
</ul>
<p><a href="https://blog.packagecloud.io/eng/2015/05/11/building-rpm-packages-with-mock/">Building RPM packages with mock</a>
and <a href="https://github.com/perfsonar/project/wiki/CentOS-Mock-Overview">Centos mock overview</a></p>
<p><a href="https://fedoraproject.org/wiki/Using_Mock_to_test_package_builds#Building_packages_that_depend_on_packages_not_in_a_repository">Using mock with fedora</a></p>
<p><a href="http://blog.packagecloud.io/eng/2015/05/11/building-rpm-packages-with-mock/">packaging rpms with mock</a></p>
<p><a href="https://gist.github.com/oussemos/cf81d86a446544bfa9c92f3576306aff">Adding trusted certs…</a> or <a href="https://access.redhat.com/solutions/1549003">redhat solution</a></p>
<p><a href="http://www.devdungeon.com/content/how-use-ssl-sockets-php">php using ssl</a></p>
<p><a href="https://www.tecmint.com/install-google-chrome-on-redhat-centos-fedora-linux/">Chrome on Centos7</a> from <a href="https://www.google.com/linuxrepositories/">google repos</a>.</p>
<h3 id="travis+ci+installation" name="travis+ci+installation">travis ci installation</h3>
<ul>
<li>Install ruby (must be greater than 1.9.3; 2.0.0 recommended)</li>
<li>Additional dependencies (through yum)</li>
<li>ruby-ffi</li>
</ul>
<p>As normal user:</p>
<ul>
<li><code>gem install travis -v 1.8.8 --no-rdoc --no-ri</code></li>
</ul>
<p>For a new application...</p>
<pre><code>travis enable
travis settings builds_only_with_travis_yml -t</code></pre>
<p><a href="https://github.com/proot-me/PRoot/releases">PRoot</a></p>
<p>project
-module: centos/alpine
-module: proot</p>
<ul>
<li>
<p><a href="https://docs.travis-ci.com/user/customizing-the-build/">travis ci custom build</a></p>
</li>
<li>
<p>install: install any dependencies required</p>
</li>
<li>
<p>script: run the build script</p>
</li>
</ul>
<h2 id="Nameing+conventions" name="Nameing+conventions">Naming conventions</h2>
<p>Naming tmXXXXYYYYRRRR</p>
<ul>
<li>tmc7r1 : centos 7 template</li>
<li>tmwin7r1 : not using these, I think it is better to do a fresh install...</li>
<li>tmal3r1 : alpine linux template.
<ul>
<li>after boot:
<ol>
<li>mount xvda1 on /media/xvda1 and run setup-alpine</li>
<li>modify /etc/inittab /etc/securetty to allow ttyS0 (console) login</li>
<li>may need to switch cdrom as needed.</li>
</ol></li>
</ul></li>
</ul>
<p>Sample vm names:</p>
<ul>
<li>winvm1 : windows vm</li>
<li>cvm2 : centos vm</li>
<li>alvm3 : alpine linux vm</li>
</ul>
<p>More complete guides: </p>
<ul>
<li><a href="https://wiki.alpinelinux.org/wiki/Linux_Router_with_VPN_on_a_Raspberry_Pi">Raspberry Pi router</a></li>
<li><a href="https://wiki.alpinelinux.org/wiki/How-To_Alpine_Wall">AWall</a></li>
<li><a href="https://wiki.alpinelinux.org/wiki/Nginx">webserver + php</a></li>
<li><a href="https://wiki.alpinelinux.org/wiki/Configure_Networking">Networking</a></li>
<li><a href="https://egustafson.github.io/ipv6-dhcpv6.html">DNSMasq IPv6 stuff</a></li>
<li><a href="https://hveem.no/using-dnsmasq-for-dhcpv6">Another DNSmasq + ipv6</a></li>
</ul>
<p>Second wifi on OpenWrt</p>
<ul>
<li><a href="https://cucumberwifi.io/community/tutorials/openwrt-adding-second-ssid.html">cucumber-wifi</a></li>
<li><a href="https://www.smallbusinesstech.net/more-complicated-instructions/openwrt/hosting-two-wifi-networks-on-one-openwrt-router">smalltech</a></li>
</ul>
android development
urn:uuid:8d4562b7-60e0-79f0-4f0d-ca25d62bc0b6
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Android devs</p>
<h2 id="Install+JAVA" name="Install+JAVA">Install JAVA</h2>
<pre><code>yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel</code></pre>
<h2 id="Install+SDK+Tools%3A" name="Install+SDK+Tools%3A">Install SDK Tools:</h2>
<p>Download the sdk-tools zip from <a href="https://developer.android.com/studio/index.html#download">here</a></p>
<pre><code>mkdir /opt/android
cd /opt/android</code></pre>
<p>The sdk should go under <code>/opt/android/tools</code></p>
<pre><code>unzip sdk-tools.zip
sudo chmod a+x $(sudo find . -type f -executable )</code></pre>
<p>Create a <code>/etc/profile.d</code> or just modify <code>PATH</code> directly</p>
<pre><code>ANDROID_HOME=/opt/android
PATH=$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$ANDROID_HOME/tools:$PATH
export ANDROID_HOME PATH</code></pre>
<p>Then run:</p>
<pre><code>sudo /opt/android/tools/bin/sdkmanager tools</code></pre>
<p>This refreshes the tools and makes sure some basic stuff is there.</p>
<h2 id="SDK+Manager+packages" name="SDK+Manager+packages">SDK Manager packages</h2>
<pre><code>sdkmanager --list</code></pre>
<p>To check the list, then install packages such as the following (use sudo...):</p>
<pre><code>build-tools;<version>
platforms;android-<version>
'system-images;android-19;google_apis;x86'</code></pre>
<p>Install latest, and others as required</p>
<p>This is needed by cordova and some applications:</p>
<h3 id="Gradle" name="Gradle">Gradle</h3>
<p>Choose binary only <a href="https://gradle.org/releases">releases</a></p>
<pre><code>cd /opt
unzip gradle-bin.zip
ln -s gradle-<version> gradle</code></pre>
<p>Add gradle bin to <code>PATH</code></p>
<p>When creating AVDs you need to tweak <code>config.ini</code> (inside <code>$HOME/.android/avd/&lt;name&gt;.avd</code>):</p>
<pre><code>hw.gpu.enabled=yes
hw.gpu.mode=host</code></pre>
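Since <code>config.ini</code> is a flat key=value file, the tweak can be scripted. A minimal sketch, assuming the file layout described above; <code>set_avd_opt</code> and the AVD name "myavd" are my own illustrative names:

```shell
# Sketch: set or update a key=value entry in an AVD's config.ini.
# set_avd_opt is a hypothetical helper; it replaces an existing key
# in place, or appends the pair if the key is absent.
set_avd_opt() {
  cfg=$1; kv=$2; key=${kv%%=*}
  if grep -q "^$key=" "$cfg" 2>/dev/null; then
    sed -i "s|^$key=.*|$kv|" "$cfg"   # update existing line
  else
    echo "$kv" >> "$cfg"              # append new line
  fi
}

# e.g. for an AVD named "myavd":
#   set_avd_opt "$HOME/.android/avd/myavd.avd/config.ini" hw.gpu.enabled=yes
#   set_avd_opt "$HOME/.android/avd/myavd.avd/config.ini" hw.gpu.mode=host
```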
<hr />
<h2 id="CORDOVA+INSTALL" name="CORDOVA+INSTALL">CORDOVA INSTALL</h2>
<pre><code>sudo yum install nodejs npm        # from EPEL
sudo npm install -g cordova
sudo npm install -g typescript     # optional</code></pre>
Securing rsync on ssh
urn:uuid:eebc6591-3a4d-0d2a-408c-9f4f055c74bb
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Reference: <a href="http://positon.org/rsync-command-restriction-over-ssh">positon.org</a></p>
<p>You have 2 systems and you want to set up a secure backup with rsync + SSH of one system to the other.</p>
<p>Very simply, you can use:</p>
<pre><code>backup.example.com# rsync -avz --numeric-ids --delete root@myserver.example.com:/path/ /backup/myserver/</code></pre>
<p>To do the backup, you have to be root on the remote server, because some files are only root readable.</p>
<p>Problem: this allows backup.example.com to do anything on myserver.example.com, when read-only access to the directory would be sufficient.</p>
<p>To solve it, you can use the command="" directive in the authorized_keys file to filter the command.</p>
<p>To find this command, start rsync adding the -e'ssh -v' option:</p>
<pre><code>rsync -avz -e'ssh -v' --numeric-ids --delete root@myserver.example.com:/path/ /backup/myserver/ 2>&1 | grep "Sending command"</code></pre>
<p>You get a result like:</p>
<pre><code>debug1: Sending command: rsync --server --sender -vlogDtprze.iLsf --numeric-ids . /path/</code></pre>
<p>Now, just add the command before the key in /root/.ssh/authorized_keys:</p>
<pre><code>command="rsync --server --sender -vlogDtprze.iLsf --numeric-ids . /path/" ssh-rsa AAAAB3NzaC1in2EAAAABIwAAABio......</code></pre>
<p>And for even more security, you can add an IP filter, and other options:</p>
<pre><code>from="backup.example.com",command="rsync --server --sender -vlogDtprze.iLsf --numeric-ids . /path/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding ssh-rsa AAAAB3NzaC1in2EAAAABIwAAABio......</code></pre>
<p>Now try to open an ssh shell on the remote server... and try some unauthorized rsync commands...
<em>Notes:</em></p>
<ul>
<li>Beware that if you change rsync command options, change also the authorized_keys file.</li>
<li>No need for complex chroot anymore.</li>
</ul>
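The fixed string in <code>command=""</code> can also be enforced by a small wrapper script, which makes the comparison explicit and logs rejections. A sketch under assumptions: the script path, the <code>validate_rsync</code> name, and the <code>ALLOWED</code> value are illustrative (reuse the exact command you captured with <code>-e'ssh -v'</code>):

```shell
# Sketch: wrapper script for authorized_keys, used as e.g.
#   command="/usr/local/bin/validate-rsync" ssh-rsa AAAA...
# ALLOWED must be the exact server command captured with -e'ssh -v'.
ALLOWED='rsync --server --sender -vlogDtprze.iLsf --numeric-ids . /path/'

validate_rsync() {
  # succeeds only when the requested command matches exactly
  [ "$1" = "$ALLOWED" ]
}

# sshd puts the client's requested command in SSH_ORIGINAL_COMMAND
if [ -n "${SSH_ORIGINAL_COMMAND:-}" ]; then
  if validate_rsync "$SSH_ORIGINAL_COMMAND"; then
    exec $SSH_ORIGINAL_COMMAND
  else
    echo "Rejected command: $SSH_ORIGINAL_COMMAND" >&2
    exit 1
  fi
fi
```

The bundled <code>rrsync</code> script mentioned below is a more flexible version of the same idea.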
<p>See also:</p>
<ul>
<li>man ssh #/AUTHORIZED_KEYS FILE FORMAT</li>
<li>man rsync</li>
<li>view /usr/share/doc/rsync/scripts/rrsync.gz (restricted rsync, allows you to manage allowed options precisely)</li>
</ul>
Desktop environments on Centos 7
urn:uuid:e931b051-af56-147a-9f62-8ff0673beb14
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>These are commands to install different desktop environments
on CentOS 7.</p>
<h3 id="Gnome" name="Gnome">Gnome</h3>
<pre><code>yum groupinstall 'GNOME Desktop'</code></pre>
<h3 id="KDE" name="KDE">KDE</h3>
<pre><code>yum groupinstall "KDE Plasma Workspaces"</code></pre>
<h3 id="Cinnamon" name="Cinnamon">Cinnamon</h3>
<pre><code>yum install epel-release
yum --enablerepo=epel install cinnamon</code></pre>
<h3 id="MATE" name="MATE">MATE</h3>
<pre><code>yum install epel-release
yum --enablerepo=epel groupinstall "MATE Desktop"</code></pre>
<h3 id="XFCE" name="XFCE">XFCE</h3>
<pre><code>yum install epel-release
yum --enablerepo=epel groupinstall XFCE</code></pre>
Telegram
urn:uuid:318da1c6-832b-f1fb-6c9a-c85a0617da8d
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="https://telegram.org/">Telegram</a> is a messenger designed to overcome
the limitations of other messengers like WhatsApp or similar ones. It
is different and better than other messengers on more than one level.
A few of the important features that make it stand out among other
messengers are:</p>
<ul>
<li><em>Open API</em>. This enables users to develop their own versions of
Telegram.</li>
<li><em>Open Protocol</em>. This protocol is highly secure and will protect
important messages from hackers</li>
<li><em>Free</em>. As in freedom, and also free of charge. There will be no ads,
and the developers intend to keep it that way forever.</li>
<li><em>It has unlimited cloud storage</em>. There is no limit on the size of
media or chats.</li>
</ul>
<p>Available on all devices: tablets, smartphones, desktop (Mac,
Windows, and Linux as well), and there is a web version too...
Telegram is not here for profit but instead to provide users an
alternative to all those other messengers which don't value privacy. We
will now look at setting up the Telegram desktop client on Linux.</p>
<p>I used Arch with the Deepin desktop, but Telegram should work on any
Linux-based PC.</p>
<p>First, visit the Telegram website and choose Telegram for PC/Mac/Linux.</p>
<p>Then, the website should automatically detect your Linux and show you
the download link. Or you can select the one that suits your PC. Keep
in mind the CPU architecture 32 bit or 64 bit. Click on Download.</p>
<p>From <a href="http://www.linuxandubuntu.com/home/telegram-messenger-on-linux-telegram-linux">linuxandubuntu</a></p>
Free Clipart sites
urn:uuid:75b18339-bc91-1c3e-e1aa-c85a12fb1d72
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>In 2014, Microsoft <a href="https://www.makeuseof.com/tag/clip-art-gone-heres-find-free-images-instead/">killed and buried Clipart</a>
in the digital graveyard.</p>
<p>Clipart had outlived its usefulness as users relied more on search
engines than Microsoft's somewhat limited supply through the Office
suite.</p>
<p>Today's clipart needs to be modern, colorful, and less cartoonish. An
online search for clip art images will net you files that are of far
better quality and relevance. But you need shortcuts if searching for
the right image is a daily workout.</p>
<p>These are <em>13 of the top websites</em> for free clipart downloads. Browse
through them and bookmark the ones that meet your needs.</p>
<ul>
<li><a href="http://www.clker.com/">clker.com</a> : This site is among the more neatly designed ones you can expect to find for clipart hunting. The images are free to download and use. All images are in the public domain, so feel free to use them anywhere after agreeing to the site's terms of use policy.</li>
<li><a href="https://www.vecteezy.com/free-vector/vector-clipart-free-download">Vecteezy</a> : Vecteezy
covers the gamut of vector art, from vector icons to vector patterns. Both free and premium
image files are available for use. You can visit the small section for free vector clipart
downloads, which is still chock-full of nearly <em>75000</em> assets. The community contributions keep the
artwork fresh and updated.</li>
<li><a href="http://etc.usf.edu/clipart/">ClipArt Etc.</a> : Start with any of the <em>71,500</em> clip art images on this simple site. The site keeps its focus on classroom friendly images that are appropriate for school websites, class projects, student reports, homework assignments, presentations, posters, art projects, picture books, bulletin boards, and creating teaching aids. For instance, go over to the Fractions collection that has nearly 500 files to help demonstrate math concepts in class.</li>
<li><a href="http://www.webweaver.nu/">Webweaver's Free Clipart</a> : Free images and animations without the annoyance of pop-ups or registration. That's how this simple site announces itself, and it lives up to its name. Clip art categories include Holidays, Animals & Nature, Celebrations, Historical, and even a dedicated one for Fantasy. No Game of Thrones here, but I spotted a few that could be passed off as Gandalf from LOTR.</li>
<li><a href="http://www.clipartof.com/">Clipart Of</a> : Clipart Of is a stock image website offering royalty-free vector, cartoon and 3D files, illustrations and clipart from artists around the world. One glance at the home page tells you that the quality is better than the average. But you have to pay for them.</li>
<li><a href="http://www.artvex.com/">ArtVex</a> : The free clip art database gives you more than <em>10,000</em> original images to choose from. The files are neatly categorized and you can also use the Google custom search engine to go through the vast collection. Some useful categories include Shapes Signs & Symbols, Math, Callouts, and Stickmen & Figures.</li>
<li><a href="http://www.clipartlord.com/">Clipart Lord</a> : There's nothing remarkable about the site apart from the fact that someone has gone to the pains of collecting the best clip art found over the web. If you are a clip art buff then the simple site is worth a bookmark. I usually find some excellent files here and use them in my projects.</li>
<li><a href="http://www.vectorportal.com/StockVectors/Clip-art/">Vector Portal</a> : The site
creates and showcases free stock vectors which designers can use in commercial
projects. The library includes a good collection of stock vectors and clip art
images which you can use with an attribution. Vector Portal allows you non-exclusive,
non-transferable rights to use and modify its images, which carry a "for commercial use" license. Most of the files are in the EPS and AI formats (supported by Adobe Illustrator).</li>
<li><a href="http://www.freepngimg.com/">Free PNG IMG</a> : You can download free PNG images, pictures, icons and clip art in high resolution quality. Categories covered include animals, artistic icons, cars, cartoons, clothing, electronics, games, fantasy, and more. It is effortless to drill down to the one you want with the detailed category breakdown. Some of the icons you might not find on most sites cover learning, internet, and entertainment related themes.</li>
<li><a href="http://www.pdclipart.org/">PD Clip Art</a> : Public Domain Clip Art is a growing trove of clipart images which requires no sign-up to download and use.</li>
<li><a href="http://all-free-download.com/free-vector/vector-clip-art/">All Free Download</a> : More than <em>21000</em> clip art choices organized in 700 pages should be enough to keep you busy. You don't have to rummage around as all files are organized around tags. Most files are in the Adobe Illustrator (AI) and Encapsulated PostScript (EPS) format. As new files are added daily, sort them by newest first. Any download is free for commercial use with attribution.</li>
<li><a href="http://school.discoveryeducation.com/clipart/">Discovery Education</a> : Discovery is one of the more <a href="https://www.makeuseof.com/tag/10-video-websites-kids-safe-fun/">kid-friendly websites</a> you can visit on the web. Go straight to the space devoted for clip arts tucked away among their educational videos and curricular resources. The graphics are excellent and cover most of the categories (including animated clip arts) you would need to complete any school assignment. The designs are consistent because they are made by one designer.</li>
<li><a href="https://commons.wikimedia.org/wiki/Main_Page">Wikimedia Commons</a> : This is arguably the largest collection of free images on the web. So far there are <em>38,205,390</em> freely usable media files and it is always open for contribution from anyone in the world. Use the search box at the top to ferret the files you want to use. As a regular user, you can take advantage of the syndicated feeds to grab the latest images that come into the archive.</li>
</ul>
<p>From <a href="http://www.makeuseof.com/tag/the-best-websites-for-free-clipart-downloads/">makeuseof</a></p>
Anti Roboto skills
urn:uuid:6fdc7f88-2ad7-82d7-548b-ce30643918ae
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Losing your job to robots is no longer a sci-fi fantasy.</p>
<p>Some estimates say robots may take over more than five million jobs
across 15 developed countries. Machines could account for more than
half the workforce in places like Cambodia and Indonesia, particularly
in the garment industry.</p>
<p>While such information has led many people to seek out higher-tech
skills, others have said we need a stronger emphasis on trade skills
to combat the high competition in tech fields. In one 2016 survey, 60
percent of respondents wanted more emphasis on Shop classes in high
schools, while a 2015 Gallup poll found that 90 percent of parents
want computer sciences emphasized in schools.</p>
<p>The good news: there are some skills robots can't embody, and if you
have them, there's no need to worry about losing your job due to
robotic advancements. Better yet, many of them are transferable,
meaning they can help you advance your career, even if you need to
change industries.</p>
<p>Here are eight skills that can keep your job from being handed off
to a robot.</p>
<ol>
<li>Complex Problem-Solving Skills</li>
<li>Project and Personnel Management Skills</li>
<li>Athletic Skills</li>
<li>Confidence and Leadership Skills</li>
<li>Critical Thinking and Judgment Skills</li>
<li>Empathy Skills</li>
<li>Listening Skills</li>
<li>Robotics and Hardware Repair</li>
</ol>
<p>Source: <a href="http://www.makeuseof.com/tag/job-skills-robot/">makeuseof</a></p>
stop procrastinating
urn:uuid:a2d77deb-f1d6-56a6-e348-f6ba95afeab5
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>We are all guilty of procrastinating from time to time--there's always
something more interesting than the work at hand. We usually think it's
no big deal, since deadlines are our biggest inspiration, and we do our
best work when we're inspired. We may even joke about it.</p>
<p>However, procrastination is a massive waste of time as it turns out.</p>
<p>A survey in 2015 found that <strong>on average, a person loses over 55 days
per year</strong> procrastinating, wasting around 218 minutes every day
on unimportant things.</p>
<p>Here's the maths:</p>
<p><em>218 minutes/day x 365 = 79570 minutes = 55.3 days</em></p>
<p>That's a lot of time wasted!</p>
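<p>For the skeptical, the arithmetic is easy to verify; a quick sketch in Python:</p>

```python
# Verify the claim: 218 wasted minutes per day adds up to ~55 days a year.
minutes_per_day = 218
minutes_per_year = minutes_per_day * 365
days_lost = minutes_per_year / (24 * 60)  # minutes in a full day
print(minutes_per_year)     # 79570
print(round(days_lost, 1))  # 55.3
```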
<p>If you think you need to have a lot of willpower to get productive,
you're wrong.</p>
<p>We're human beings, and we all have limited willpower. Our brain is wired
for instant gratification. Temporary rewards are always more tempting
to us.</p>
<p>When you make plans, you're making plans for your future self.
You'll only experience the benefits in the future. But most of the time,
the present moment can give you the immediate reward you want, making
you want to delay the plans and just enjoy the moment.</p>
<p>This is why relying on our willpower to stop procrastination will never
be effective. What we should do is to look into the root causes of
procrastination and start with the small things we can do every day
and build a habit of staying productive.</p>
<p>Basically, there are 5 common reasons why we procrastinate.</p>
<p>Identify the real reason and find out how to stop procrastination
accordingly:</p>
<h3 id="Type+1%3A+The+Perfectionist" name="Type+1%3A+The+Perfectionist">Type 1: The Perfectionist</h3>
<p>They are the ones who pay too much attention to the minor details.
The perfectionist is afraid to start a task because they get stressed
out about getting every detail right. They can also get stuck in the
process even when they've started since they're just too scared to move
on.</p>
<h4 id="Advice+for+the+Perfectionist%3A" name="Advice+for+the+Perfectionist%3A">Advice for the Perfectionist:</h4>
<p>Instead of letting your obsession with details take up all your time,
be clear about the purpose of your tasks and assign a time limit to
each task. This will force you to stay focused and finish your task
within the time frame.</p>
<p>For example,</p>
<p>If you're going to write a report, be clear about the purpose of the
report first.</p>
<p>If the goal of having the report is to clearly present the changes
in data over the past few months, don't sweat too much about writing
up a lot of fancy prose; rather, focus more on the figures and
charts. Just make sure the goal can be reached, and there's really
no need to work on things that don't help you achieve the ultimate goal.</p>
<h3 id="Type+2%3A+The+Dreamer" name="Type+2%3A+The+Dreamer">Type 2: The Dreamer</h3>
<p>This is someone who enjoys making the ideal plan more than taking
actions. They are highly creative, but find it hard to actually finish
a task.</p>
<h4 id="Advice+for+the+Dreamer" name="Advice+for+the+Dreamer">Advice for the Dreamer</h4>
<p>To stop yourself from being carried away by your endless imagination,
get your feet back on the ground by setting specific (and achievable)
goals for each day based on the SMART framework. Set a goal and break
down the plan into small tasks that you can take action on right away.</p>
<p>For example,</p>
<p>If you dream about waking up earlier every day, set a clear goal
about it - "In 3 weeks, I will wake up at 6:30am every day."</p>
<p>Then, break this goal down into smaller tasks:</p>
<ul>
<li>From tonight onwards, I will go to sleep before 11:00pm.</li>
<li>Set an alarm to remind me to go to sleep</li>
<li>Schedule gatherings with friends earlier so I can go to sleep early</li>
<li>For the 1st week, I will wake up at 7:30am even on non-working days</li>
<li>Go jogging or swimming in the morning on weekends</li>
</ul>
<p>... and the task list goes on.</p>
<p>Also, you should reflect on your progress while you work. Track your
input and output for each task, so you can easily tell which tasks are
only a waste of time with little importance. This can help you focus
on doing the things that bring positive results, which will improve
productivity.</p>
<h3 id="Type+3%3A+The+Avoider" name="Type+3%3A+The+Avoider">Type 3: The Avoider</h3>
<p>Avoiders are scared to take on tasks that they think they can't
manage. They would rather put off work than be judged by others when
they end up making mistakes.</p>
<h4 id="Advice+for+the+Avoider" name="Advice+for+the+Avoider">Advice for the Avoider</h4>
<p>I know checking emails seems tempting, but don't make answering
emails the first thing on your to-do list. More often than not,
emails are unimportant. But they steal your time and mental energy
before you even notice.</p>
<p>Instead, focus on the worst first. Spend your morning working on
what you find most challenging. This will give you a sense of
achievement and help you build momentum for a productive day ahead.</p>
<p>Try to break down your tasks into smaller sub-tasks. Understand how
much time and energy is really needed for a given task. Make realistic
calculations.</p>
<p>For example,</p>
<p>A 2000-word report does seem to take a lot of time and effort, and it
does seem scary to just start working on it. But is there any way to
break this down into smaller pieces so it'll seem less scary?
What about this:</p>
<ul>
<li>Introduction: around 100 words (15 min)</li>
<li>Table of content (5 min)</li>
<li>Report on the financial status: a chart with 100 words of supporting text (20 min)</li>
<li>Case study: 3 cases based on the new business model with around 400 words each (around 40 min each)</li>
<li>Conclusion: around 800 words (30 min)</li>
</ul>
<p>Does it look a lot easier now?</p>
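<p>Adding up those estimates shows why the breakdown helps; a quick tally (the times are the rough guesses from the list above):</p>

```python
# Tally the rough per-sub-task estimates from the report breakdown above.
subtasks = {
    "introduction": 15,
    "table of contents": 5,
    "financial status chart": 20,
    "case studies": 3 * 40,  # 3 cases at ~40 min each
    "conclusion": 30,
}
total = sum(subtasks.values())
print(total)  # 190 minutes - barely over three hours
```

<p>Seen as five bites of 5 to 40 minutes each, the report is far less intimidating than one three-hour block.</p>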
<h3 id="Type+4%3A+The+Crisis-maker" name="Type+4%3A+The+Crisis-maker">Type 4: The Crisis-maker</h3>
<p>Now the crisis-maker deliberately pushes back work until the last
minute. They find deadlines (the crises) exciting, and believe that
they work best when being forced to rush it.</p>
<h4 id="Advice+for+the+Crisis-maker" name="Advice+for+the+Crisis-maker">Advice for the Crisis-maker</h4>
<p>The idea that being forced to rush makes you perform better is just
an illusion, because rushing actually leaves you no room for reviewing
the work to make it better afterwards.</p>
<p>If you always leave work until the last minute, try using the
Pomodoro technique. Literally the "tomato technique" developed by
Italian entrepreneur Francesco Cirillo.</p>
<p>It focuses on working in short, intensely focused bursts, and then
giving yourself a brief break to recover and start over.</p>
<p>For example,</p>
<p>Use a timer and divide your complex work into small manageable
sessions. In between the small sessions, give yourself a break
to recover.</p>
<p>Giving your brain a regular break can greatly boost your
performance by recharging your brain's energy, and having completed
the tasks earlier allows you plenty of time to go through
your work again to make it even better.</p>
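<p>As a concrete sketch, here is what a Pomodoro-style schedule generator could look like (the 25/5/15-minute lengths are the commonly cited defaults, not a rule):</p>

```python
# Build a Pomodoro-style plan: focused work sessions separated by short
# breaks, with a longer break after every fourth session (a common variant).
def pomodoro_schedule(sessions, work=25, short_break=5, long_break=15):
    """Return a list of (activity, minutes) tuples."""
    plan = []
    for i in range(1, sessions + 1):
        plan.append(("work", work))
        if i < sessions:  # no break needed after the last session
            plan.append(("break", long_break if i % 4 == 0 else short_break))
    return plan

plan = pomodoro_schedule(4)
total = sum(minutes for _, minutes in plan)
print(plan)
print(total)  # 115 minutes: 4 x 25 work + 3 x 5 break
```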
<h3 id="Type+5%3A+The+Busy+Procrastinator" name="Type+5%3A+The+Busy+Procrastinator">Type 5: The Busy Procrastinator</h3>
<p>This type of procrastinator is the fussy one. They have trouble
prioritizing tasks because they either have too many of them or
refuse to work on what they see as unworthy of their effort.
They don't know how to choose the task that's best for them and
simply postpone making any decisions.</p>
<h4 id="Advice+for+the+Busy+Procrastinator" name="Advice+for+the+Busy+Procrastinator">Advice for the Busy Procrastinator</h4>
<p>You have to get your priorities straight. Important tasks should
take priority over urgent ones because "urgent" doesn't always mean
important. You only have so much time and energy, and you don't want
to waste that on things that don't matter.</p>
<p>Identify the purpose of your task and the expected outcome. Important
tasks are the ones that add value in the long run.</p>
<p>Replying to an email that says "please get back to me asap" seems
urgent, but before you reply to that email, think about how
important it is compared to other tasks.</p>
<p>For example,</p>
<p>Imagine the email is sent by a client asking about the progress of
a project and she wants you to reply to her as soon as possible; at the
same time you have another task about fixing the logistics problem
that is affecting all the projects on hand. Which one should you
handle first?</p>
<p>The time cost of replying to an email is as low as around 5
minutes, but the benefit is also very low because you're just
satisfying one client's request. Fixing the logistics problem probably
takes a lot more time, but it's also a lot more worthwhile because by
fixing the problem, you're saving all the projects on hand,
benefiting the whole company.</p>
<h3 id="Be+smart+about+every+small+choice+you+make+because..." name="Be+smart+about+every+small+choice+you+make+because...">Be smart about every small choice you make because...</h3>
<p>You may notice most of the characteristics of procrastinators have
to do with their mindset. They keep delaying work because of some sort
of fear. This is exactly why tweaking our attitude towards work can
help us stop procrastinating and become more productive.</p>
<p>Changing our mindset may seem like a lot of work. But by doing the
smallest things every day, you get used to the way you handle
work - from setting goals, to breaking down tasks, to evaluating
each task's value.</p>
<p>Source <a href="http://www.lifehack.org/565818/why-procrastinate-and-how-stop-procrastination">lifehack.org</a></p>
manage busyness
urn:uuid:69cb2ceb-616f-9f22-e2dc-831f5d016a54
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Former United States President Dwight Eisenhower was responsible for
putting together one of the most important yet fundamentally simple
to understand concepts in time management. Eisenhower's
Urgent/Important Principle is a tool to help decipher what tasks need
to be addressed more immediately than others. Anyone who uses the
principle will be better able to organize and orchestrate their
daily tasks. This skill is especially imperative for busy people
who find themselves working too hard and still not getting everything
done.</p>
<p>Eisenhower's Urgent/Important Principle places tasks into four
categories:</p>
<ul>
<li>Important and Urgent</li>
<li>Important but Not Urgent</li>
<li>Not Important but Urgent</li>
<li>Not Important and Not Urgent</li>
</ul>
<p>These four categories are used to label and organize which tasks need
to be addressed first and which ones can be approached last. By
asserting something's importance and its urgency, we are better able
to identify what comes first:</p>
<p><img src="/images/2017/covey-time-management-grid.png" alt="quadrants" /></p>
<p>What these quadrants reveal is that identifying which tasks are either
important or urgent boils down to time management and what makes us
most efficient. For example, President Obama's former campaign manager
said in an article by WebMD that Obama valued his time to exercise and
that it helped fuel him for the rest of his day. According to Obama,
"The rest of my time will be more productive if you give me my workout
time." The article goes on in detail about his routine and how he
values its importance.</p>
<p>James Clear, a behavioral psychology writer, noted in a blog post
that "too often, we use productivity, time management, and optimization
as an excuse to avoid the really difficult question: 'Do I actually
need to be doing this?' It is much easier to remain busy and tell
yourself that you just need to be a little more efficient or to 'work
a little later tonight' than to endure the pain of eliminating a task
that you are comfortable with doing, but that isn't the highest and
best use of your time."</p>
<p>Let's take a deeper look at each quadrant, what it means, and how we
should approach all of our tasks with either urgency or importance
(or both).</p>
<h3 id="Urgent+And+Important" name="Urgent+And+Important">Urgent And Important</h3>
<p>Urgent/Important tasks can arise unexpectedly or may have
been left for the last minute. These tasks need to be managed ahead
of time. Make plans to address these tasks so that they do not become
stressful activities when it comes close to deadlines. It's also a
good idea to leave some wiggle room in your daily schedule just in
case unexpected tasks come about.</p>
<p>Assess your deadlines. Are you moving at an appropriate pace to meet
that deadline?</p>
<p>Emergencies happen. Whether they are unexpected meetings or sickness
or injuries, they can't be put off until later.</p>
<p>This will force you to reconsider your task list and how much time
you have to apply to each quadrant.</p>
<h3 id="Important+But+Not+Urgent" name="Important+But+Not+Urgent">Important But Not Urgent</h3>
<p>Not Urgent/Important tasks are integral to personal growth, building
relationships, and accomplishing long-term professional goals. If these
tasks are given the proper amount of time, they will not become urgent.
This will prevent last-minute tasks from unexpectedly
cluttering up your time later on, keeping stress and frustration at
bay. You'll be able to complete work efficiently and effectively.</p>
<p>Exercise is an example of this. Personal growth through exercise is
not an overnight process. Training for a run or any other sort of
competition doesn't begin just days before. Plan your goals ahead of
time, but leave room for urgent, unexpected tasks.</p>
<p>Maintaining your relationships is also important. Keep up with
friends and family and partners, but be mindful of how much time
you're allotting here. There is such a thing as putting too much
time into relationships. Your goals are important, too. If you keep
putting them off, they'll soon become urgent and you'll become
stressed. This may affect your relationships in the long run.</p>
<h3 id="Urgent+But+Not+Important" name="Urgent+But+Not+Important">Urgent But Not Important</h3>
<p>Urgent/Not Important tasks are cumbersome and get in the way of
your goals. Responding to phone calls or emails that are not
pertinent to your goals or attending meetings with people who don't
bring any value to completing your activities can be wasted time.
Avoid these if possible and delegate the activities if you can.
Something to keep in mind: you're saying yes to the person, but no to
the task.</p>
<p>If someone or something requires that you do things for them
frequently, then it might be best to arrange time for them in one
larger block of time. This will allow you to focus your energy and
time on multiple things.</p>
<p>Respond to time-sensitive correspondence as needed. Don't wait until
after a deadline to inform someone when that deadline is:</p>
<p>You: "Hey, the class will be starting at noon today."</p>
<p>Colleague: "Really? Because it's already 2 P.M.!"</p>
<h3 id="Not+Urgent+And+Not+Important" name="Not+Urgent+And+Not+Important">Not Urgent And Not Important</h3>
<p>Not Urgent/Not Important tasks should also be avoided. Spending
time on Facebook or Twitter, watching TV, and shopping (when it's
not important to completing your tasks to have the things you're
shopping for) can significantly drain your time. Limit these tasks
as much as possible. It's not always going to be easy saying no to
these mostly leisure activities, but it is important to remain
mindful of how much of that time you're using here.</p>
<p>Yes, everyone is talking about the new show on Netflix. They watched
it this past weekend and are already posting memes and gifs on
Facebook. This doesn't mean you have to do the same.</p>
<p>Complete tasks first and then assess if you have time to participate
in leisure. Otherwise, you're procrastinating, and that affects all
the other quadrants.</p>
<h3 id="In+Conclusion" name="In+Conclusion">In Conclusion</h3>
<p>Eisenhower's Principles can be vital in developing skills to
effectively and consistently complete tasks, delegate properly,
and work efficiently. Take time to look over your tasks to determine
which quadrant they belong in.</p>
<ul>
<li>Is there a deadline? If yes, then it is important.</li>
<li>Is the deadline soon? If yes, then it is urgent.</li>
<li>Is the task necessary to completing the other tasks? If yes, then it is important.</li>
<li>Can I delegate the task to someone else? If yes, then it is not important.</li>
<li>What does it have to do with your personal growth?</li>
<li>What does it have to do with your professional growth?</li>
</ul>
<p>Ask yourself these questions when you need to determine a task's
importance and urgency. Make a quadrant table of your own somewhere
to help you visualize all your tasks. This is an excellent exercise
for time management, and it could be the foundation of healthy work
habits that stick around for a long time.</p>
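<p>The decision rules above are simple enough to write down as code; a toy sketch, where the two booleans stand in for the judgment calls the questions ask you to make:</p>

```python
# Toy Eisenhower classifier: map the two judgment calls onto the four
# quadrants and the usual recommended action for each.
def quadrant(important, urgent):
    if important and urgent:
        return "Important and Urgent: do it now"
    if important:
        return "Important but Not Urgent: schedule it"
    if urgent:
        return "Not Important but Urgent: delegate it"
    return "Not Important and Not Urgent: limit it"

print(quadrant(important=True, urgent=False))
# Important but Not Urgent: schedule it
```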
<p>Source <a href="http://www.lifehack.org/463821/if-youre-busy-but-still-find-your-hard-work-doesnt-pay-off-you-probably-lack-this-important-skill">lifehack.org</a></p>
Kivy
urn:uuid:4282503e-c895-4b78-3ee6-74cfce1ee694
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Today I want to briefly write about <a href="https://kivy.org/">kivy</a>. <a href="https://kivy.org/">kivy</a> is a
<a href="https://www.python.org/">Python</a> library intended for developing mobile apps. It is a
cross-platform library that runs on <a href="https://www.android.com/">Android</a>, <a href="http://www.apple.com/ios/">iOS</a>,
<a href="https://www.cs.helsinki.fi/linux/">Linux</a>, <a href="https://www.apple.com/macos/">OS X</a> and <a href="https://www.microsoft.com/en-us/windows">Windows</a>. It is licensed under
the <a href="https://opensource.org/licenses/MIT">MIT license</a>, so it is free and open source.</p>
<p><a href="https://kivy.org/">Kivy</a> is the main framework developed by the <a href="https://kivy.org/#aboutus">Kivy organisation</a>, alongside <a href="https://github.com/kivy/python-for-android">Python for Android</a>,
<a href="https://github.com/kivy/kivy-ios">Kivy iOS</a> and other libraries meant to be used on all platforms. It is compatible with Python 2 and Python 3,
and also supports the Raspberry Pi.</p>
<p>The framework contains all the elements for building an application such as:</p>
<ul>
<li>extensive input support for mouse, keyboard, TUIO, and OS-specific multitouch events,</li>
<li>a graphic library using only OpenGL ES 2, and based on Vertex Buffer Object and shaders,</li>
<li>a wide range of Widgets that support multitouch,</li>
<li>an intermediate language <a href="https://kivy.org/docs/guide/lang.html">kv</a> used to easily design custom Widgets.</li>
</ul>
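<p>To give a flavour of the kv language mentioned above, here is a small sketch of a kv rule (the widget name and property values are made up for illustration):</p>
<pre><code class="language-text">&lt;MyButton@Button&gt;:
    text: "Hello, Kivy"
    font_size: 24
    size_hint: 0.5, 0.25</code></pre>
<p>A rule like this restyles a Button subclass declaratively, keeping layout and styling out of the Python code.</p>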
Mail Archiver ideas
urn:uuid:53e64392-3a5f-e6db-e006-bef6b1b112fc
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>We use it for receiving junk e-mails (i.e. for those times where we need an e-mail address for sign-up to a service).</p>
<p>E-mails are of the form:</p>
<p>ar-XXXX@0ink.net</p>
<h3 id="TODO%3A" name="TODO%3A">TODO:</h3>
<p>Extend postie:</p>
<ul>
<li><a href="http://postieplugin.com/extending/">http://postieplugin.com/extending/</a></li>
<li><a href="http://postieplugin.com/postie_post_before/">http://postieplugin.com/postie_post_before/</a></li>
</ul>
<p>Before posting we insert all the header information into a table.</p>
<p>Automatically delete postings. If we want to keep the post we change its category.</p>
<ul>
<li><a href="https://wordpress.org/plugins/auto-prune-posts/">https://wordpress.org/plugins/auto-prune-posts/</a></li>
</ul>
<p>MAYBE: Markdownify it...</p>
<ul>
<li><a href="https://github.com/Elephant418/Markdownify">https://github.com/Elephant418/Markdownify</a></li>
</ul>
<p>EXTRA ARCHIVER:</p>
<ul>
<li>Check how gmail keeps folders (<a href="http://php.net/manual/en/function.imap-open.php">http://php.net/manual/en/function.imap-open.php</a>)
And then see if we can hack it into Postie.</li>
<li><a href="https://www.electrictoolbox.com/open-mailbox-other-than-inbox-php-imap/">https://www.electrictoolbox.com/open-mailbox-other-than-inbox-php-imap/</a>
Maybe we do by e-mail@something/folder</li>
</ul>
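<p>One way to see how a server such as Gmail exposes folders is to look at raw IMAP <code>LIST</code> responses (RFC 3501). A small sketch of parsing them follows; the sample lines are made up, and with Python's <code>imaplib</code> you would get real ones from <code>IMAP4_SSL(host).list()</code>:</p>

```python
import re

# Parse IMAP LIST response lines such as:
#   (\HasNoChildren) "/" "INBOX/Archive"
# into (flags, hierarchy delimiter, mailbox name).
LIST_RE = re.compile(r'\((?P<flags>[^)]*)\) "(?P<delim>[^"]*)" "?(?P<name>[^"]*)"?')

def parse_list_line(line):
    m = LIST_RE.match(line)
    if not m:
        return None
    return (m.group("flags").split(), m.group("delim"), m.group("name"))

sample = [
    r'(\HasNoChildren) "/" "INBOX"',
    r'(\HasChildren \Noselect) "/" "[Gmail]"',
    r'(\HasNoChildren \All) "/" "[Gmail]/All Mail"',
]
for line in sample:
    print(parse_list_line(line))
```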
<p><a href="http://postieplugin.com/forcing-an-email-check/">http://postieplugin.com/forcing-an-email-check/</a></p>
<p>Would be the call back to MailGun</p>
<ul>
<li>Check if we can add MailGun to Postie
<ul>
<li>save transient when we start</li>
<li>Use event API to get message lists (since last transient)</li>
<li>Use message API to retrieve and delete messages</li>
</ul></li>
</ul>
<ul>
<li>E-mail archiving alternatives
<ul>
<li><a href="http://lurker.sourceforge.net/">lurker </a></li>
<li><a href="https://www.enkive.org/">Enkive</a></li>
<li><a href="http://terminal.se/code.html">mboxpurge.pl</a></li>
<li><a href="http://archivemail.sourceforge.net/">archivemail</a></li>
<li><a href="https://sourceforge.net/projects/openmailarchiva/">Open Mail Archiva</a></li>
<li><a href="http://git.io/gyb">GYB</a></li>
<li><a href="http://gmvault.org">gmvault</a></li>
<li><a href="http://www.mailpiler.org/wiki/start">Mail Piler</a></li>
</ul></li>
</ul>
Fixed drive letters for removable USB sticks
urn:uuid:54bffae1-fe8d-a569-06b9-1780cc5de3e1
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>If you use multiple USB drives, you've probably noticed that the drive letter can be
different each time you plug one in. If you'd like to assign a static letter to a drive that's
the same every time you plug it in, read on.</p>
<p>Windows assigns drive letters to whatever type of drive is available. This can be annoying
especially if you use backup tools or portable apps that prefer to have the same drive letter
every time.</p>
<p>To work with drive letters, you'll use the Disk Management tool built into Windows. In Windows
7, 8, or 10, click Start, type <code>create and format</code>, and then click <code>Create and format hard disk partitions</code>. Don't worry. You're not going to be formatting or creating anything. That's just
the Start menu entry for the Disk Management tool. This procedure works the same in pretty much
any version of Windows (though in Windows XP and Vista, you'd need to launch Disk Management
through the Administrative Tools item in the Control Panel).</p>
<p><img src="/images/2017/sud_1.png" alt="sud_1" /></p>
<p>Windows will scan and then display all the drives connected to your PC in the Disk Management
window. Right-click the USB drive to which you want to assign a persistent drive letter and
then click <code>Change Drive Letter and Paths.</code></p>
<p><img src="/images/2017/sud_2.png" alt="sud_2" /></p>
<p>The <code>Change Drive Letter and Paths</code> window shows the selected drive's current drive letter. To
change the drive letter, click <code>Change</code>.</p>
<p><img src="/images/2017/sud_3.png" alt="sud_3" /></p>
<p>In the <code>Change Drive Letter or Path</code> window that opens, make sure the <code>Assign the following drive letter</code> option is selected and then use the drop-down menu to select a new drive letter.
When you're done, click <code>OK.</code></p>
<p>NOTE: We suggest picking a drive letter between M and Z, because earlier drive letters may
still get assigned to drives that don't always show up in File Explorer, like optical and
removable card drives. M through Z are almost never used on most Windows systems.</p>
<p><img src="/images/2017/sud_4.png" alt="sud_4" /></p>
<p>Windows will display a warning letting you know that some apps might rely on drive letters
to run properly. For the most part, you won't have to worry about this. But if you do have
any apps in which you've specified another drive letter for this drive, you may need to
change them. Click <code>Yes</code> to continue.</p>
<p><img src="/images/2017/sud_5.png" alt="sud_5" /></p>
<p>Back in the main Disk Management window, you should see the new drive letter assigned to the
drive. You can now close the Disk Management window.</p>
<p><img src="/images/2017/sud_6.png" alt="sud_6" /></p>
<p>From now on, when you disconnect and reconnect the drive, that new drive letter should persist.
You can also now use fixed paths for that drive in apps (such as backup apps) that may require them.</p>
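<p>If you need to script the assignment (for example, on several machines), the same change can be made with a <code>diskpart</code> script. A sketch follows; the volume number and the letter <code>M</code> are examples you would adjust after checking <code>list volume</code>:</p>
<pre><code class="language-text">rem fixletter.txt - run from an administrator prompt with: diskpart /s fixletter.txt
select volume 5
assign letter=M</code></pre>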
<p>Source: <a href="http://www.howtogeek.com/96298/assign-a-static-drive-letter-to-a-usb-drive-in-windows-7/">howtogeek</a></p>
Portable Console
urn:uuid:edc29a39-9e42-83ab-7162-c81b0e88f066
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Set scrolling region:</p>
<pre><code class="language-bash">printf "\033[1;24r"</code></pre>
<p>Reset scrolling region:</p>
<pre><code class="language-bash">printf "\033[r"</code></pre>
<p>However, it is easier/better to do:</p>
<pre><code class="language-bash">stty rows 24 cols 80</code></pre>
Xnest
urn:uuid:6e6a6d70-070f-8dfc-54f5-9f2094170ea8
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This trick lets you run X-Windows within an X-Windows session.</p>
<p>This is kinda like running VNC. It is useful for testing scenarios.</p>
<pre><code>#!/bin/sh
# Start a nested X server as display :1 in an 800x600 window.
# Note: -ac disables access control; use only on a trusted machine.
Xnest :1 -name "Bla" -ac -geometry 800x600 &amp;
sleep 1
# Point clients at the nested server and run a terminal in it.
export DISPLAY=:1
exec xterm</code></pre>
CyberWorld 2017.1
urn:uuid:34b6c15c-90d3-d2f4-3855-840bc1355b63
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Development</p>
<ul>
<li>
<p><a href="https://gist.github.com/qertis/acd71e14db4168832f3b67c75182af04/">travis cordova build</a></p>
</li>
<li>
<p><a href="https://github.com/svenlaater/travis-ci-ionic-yml">travis ionic build</a></p>
</li>
<li>
<p>owx</p>
<ul>
<li>common
<ul>
<li>muninlite (can it support plugins?)</li>
<li>flock, pwgen, ifstat</li>
</ul></li>
<li>ow1
<ul>
<li>diags&amp;tools: usbutils, netstat-nat</li>
<li>sniffer: tcpdump[-mini] 317K/617K, libpcap 191K</li>
</ul></li>
<li>owX
<ul>
<li>FW/NAT</li>
<li>DNSMASQ: DHCP + DNS</li>
<li>NTP server</li>
<li>Dynamic DNS updating (mushu porker)</li>
<li>NFS</li>
<li>IPv6 tunnel</li>
<li>Provisioning server: (PXE, TFTP, NFS, HTTP, rpmgot, syslog?)</li>
<li>TLR server: HTTP, file manipulation, HTTPS?</li>
<li>USB storage</li>
</ul></li>
</ul>
</li>
<li>
<p>owx - switches</p>
</li>
<li>
<p>cn1</p>
<ul>
<li><input type="checkbox" disabled > data scrubbing</li>
<li><input type="checkbox" disabled > backups</li>
<li><input type="checkbox" disabled > boot cd mirroring</li>
<li><input type="checkbox" disabled > config backup to alvm1</li>
<li><input type="checkbox" disabled > NFS mounting installable iso images</li>
<li>alvm1 : Main file store
<ul>
<li><input type="checkbox" disabled checked> file sharing (NFS, Samba, http)</li>
<li><input type="checkbox" disabled > rsync backup target</li>
<li><input type="checkbox" disabled > undup, backup puller</li>
</ul></li>
<li><input type="checkbox" disabled > alvm2 : Backup file store
<ul>
<li>snapshot server (NFS)</li>
<li>backuper</li>
</ul></li>
<li><input type="checkbox" disabled > alvm3 : Transmission
<ul>
<li>Implemented as its own server because of the VPN</li>
</ul></li>
<li><input type="checkbox" disabled > cvm1 : Main APP server</li>
<li><input type="checkbox" disabled > alvm4 : X10 server
<ul>
<li>Implemented as its own server because VM only runs if HW is available</li>
</ul></li>
<li><input type="checkbox" disabled > alvm5: DMZ Server
<ul>
<li><input type="checkbox" disabled > reverse proxy</li>
<li><input type="checkbox" disabled > PocketMine
<ul>
<li>Muirfield</li>
<li>Niños</li>
</ul></li>
<li><input type="checkbox" disabled > asterisk</li>
</ul></li>
<li><input type="checkbox" disabled > alvm6 : Scan&amp;Print server
<ul>
<li>Spin-off cvm1, because SELINUX exception. Shouldn't connect to DMZ, nor X10</li>
</ul></li>
</ul>
</li>
</ul>
<h3 id="DMZ+Server+Basic+Alpine+Linux+install" name="DMZ+Server+Basic+Alpine+Linux+install">DMZ Server Basic Alpine Linux install</h3>
<ul>
<li>Create dos partition on the data drive</li>
<li>mkdosfs on partition and mount</li>
<li>setup-alpine</li>
<li>apk update</li>
<li>lbu ci</li>
</ul>
<hr />
<h3 id="Reverse+Proxy" name="Reverse+Proxy">Reverse Proxy</h3>
<h4 id="install+nginx" name="install+nginx">install nginx</h4>
<ul>
<li>apk add nginx ?php-fpm?</li>
<li>configure in /etc/nginx/nginx.conf (<a href="https://wiki.alpinelinux.org/wiki/OwnCloud#Nginx">Reference</a>)</li>
<li>apk add apache2-utils : for the htpasswd command</li>
<li>Add a proxy command:</li>
</ul>
<pre><code> location / {
proxy_pass http://$server/;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/_htpasswd;
proxy_set_header X-Remote-User $remote_user;
proxy_pass_request_headers on;
}</code></pre>
<p>Variable Reference: <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#variables">NGINX Docs</a></p>
<hr />
<h3 id="WebServer" name="WebServer">WebServer</h3>
<ul>
<li>PHP checks headers (passed by the reverse proxy), otherwise,</li>
<li>use authd:</li>
<li>If using selinux we need to set this boolean:
<ul>
<li>setsebool -P httpd_can_network_connect on</li>
</ul></li>
<li>PHP Function on server to determine user:</li>
</ul>
<pre><code class="language-php">define('IDENT_PORT',113);
// Query an ident (RFC 1413) daemon for the user owning a connection.
function identd_query($remote_ip,$remote_port,$local_port,$tout=3) {
  $remote_ip = 'localhost'; // NOTE: overrides the caller; query the local ident daemon only
  $sock = @fsockopen($remote_ip,IDENT_PORT,$errno,$errstr,$tout);
  //print_r([$sock,$errno,$errstr]);
  if (!$sock) return FALSE;
  @fwrite($sock,$remote_port.','.$local_port."\r\n");
  $line = @fgets($sock,1000); // 1000 octets according to RFC1413
  fclose($sock);
  if (preg_match('/^\s*(\d+)\s*,\s*(\d+)\s*:\s*(\S+)\s*:\s*(\S+)\s*:\s*(\S+)\s*$/', $line,$mv)) {
    if ($mv[1] == $remote_port &amp;&amp; $mv[2] == $local_port &amp;&amp;
        $mv[3] == 'USERID') {
      return $mv[5]; // the user id reported by the daemon
    }
  }
  return FALSE;
}</code></pre>
<h3 id="Web+Browser" name="Web+Browser">Web Browser</h3>
<p>For Arch Linux, install oidentd. For Fedora/CentOS, use authd:</p>
<ul>
<li>yum install authd</li>
<li>check firewall port</li>
<li>systemctl start authd.socket</li>
<li>systemctl enable authd.socket</li>
<li>Add Override:
<ul>
<li>/etc/systemd/system/auth@.service.d/override.conf
<ul>
<li>[Service]</li>
<li>ExecStart=</li>
<li>ExecStart=-/usr/sbin/in.authd -t60 --xerror</li>
</ul></li>
</ul></li>
</ul>
<hr />
<p><a href="https://wiki.alpinelinux.org/wiki/LXC">https://wiki.alpinelinux.org/wiki/LXC</a>
<a href="https://wiki.alpinelinux.org/wiki/Setting_up_a_basic_vserver">https://wiki.alpinelinux.org/wiki/Setting_up_a_basic_vserver</a></p>
<hr />
<p>browser -> guac -> xinetd|vncserver|x2go-client -> x2go-server
browser -> revproxy -> guac -> xinetd|vncserver|x2go-client -> x2go-server</p>
<hr />
<p>Check what Thin client software Tiny Core Linux supports otherwise Browser with Guacamole</p>
<hr />
<p>Server script (haserl) on OW1
Show version and last update
Options: Delete Entry
Post update : Using wget</p>
<hr />
<p>Create a local pastebin (to add notes from SONY PRS-T2)
<a href="https://wiki.alpinelinux.org/wiki/Pastebin">https://wiki.alpinelinux.org/wiki/Pastebin</a></p>
<hr />
<h3 id="Configure+a+Windows+VM" name="Configure+a+Windows+VM">Configure a Windows VM</h3>
<pre><code>./mxt.sh \
vmcfg \
vm=winvm1 \
rem="win7 system" \
-serial \
viridian=1 \
boot=d \
hd=1,16G \
cdrom=3,/xendat/installers/Win7AIO.x32-x64.preact.iso</code></pre>
<h3 id="Centos+7" name="Centos+7">Centos 7</h3>
<p>Template preparation </p>
<p>Configure serial console:</p>
<ol>
<li>Modify <code>/etc/default/grub</code>
<ul>
<li>GRUB_TERMINAL_OUTPUT=serial</li>
<li>GRUB_CMDLINE_LINUX=console=ttyS0 --rhgb</li>
</ul></li>
<li>Run <code>grub2-mkconfig -o $d/grub.cfg</code> either on <code>/boot/efi/EFI</code> or <code>/boot/grub2</code></li>
</ol>
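<p>The resulting <code>/etc/default/grub</code> fragment would look like this (a sketch based on the notes above; I read the <code>--rhgb</code> note as dropping <code>rhgb</code> from the kernel command line, and quoting is added for safety):</p>
<pre><code class="language-text">GRUB_TERMINAL_OUTPUT=serial
GRUB_CMDLINE_LINUX="console=ttyS0"</code></pre>
<p>Then regenerate with <code>grub2-mkconfig -o /boot/grub2/grub.cfg</code>, or the corresponding <code>grub.cfg</code> under <code>/boot/efi/EFI</code> on EFI systems.</p>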
<p>Stop ssh and remove all ssh keys. </p>
<p>Modify rc.local to run something once:</p>
<ul>
<li>change hostname (if possible?)</li>
<li>remove all SSH keys (and reboot)</li>
</ul>
<p>Create a centos/xen template prep script. We pass it as a custom tar in xvdh. Another option:</p>
<ul>
<li>Use a serial port (connected to UNIX socket)</li>
<li><a href="http://xenbits.xen.org/docs/unstable/misc/channel.txt">Use a Xen PV channel</a></li>
</ul>
<p>We need to pass the VM name.</p>
<p><a href="http://www.certdepot.net/rhel7-get-started-systemd/">rhel7 and systemd</a></p>
<p>Cfg script /etc/xen</p>
<ol>
<li>After block devices stanza</li>
<li>Check if tar is there</li>
<li>Append to the list</li>
</ol>
<p>Better to do <a href="http://silviud.blogspot.nl/2011/09/from-domu-read-xenstore-ec2-linode-etc.html?m=1">this</a></p>
<h3 id="Notes" name="Notes">Notes</h3>
<h4 id="munin" name="munin">munin</h4>
<ul>
<li><a href="http://munin-monitoring.org/wiki/HowToWritePlugins">plugin writing</a></li>
<li>Monitored data using <a href="http://support.citrix.com/article/CTX127896">xentop</a>
<ul>
<li>CPU is done, what about vbd I/O or network I/O</li>
</ul></li>
<li>xen wiki on <a href="http://wiki.xenproject.org/wiki?title=Special%3ASearch&search=xentop&go=Go">xentop</a></li>
</ul>
<h3 id="Serial+xen+configuration" name="Serial+xen+configuration">Serial xen configuration</h3>
<ul>
<li>serial=/dev/ttyS0<br />
[Linux only] Use host tty, e.g. ‘/dev/ttyS0’. The host serial port parameters are set according to
the emulated ones.</li>
<li>serial=unix:path[,server][,nowait]<br />
A unix domain socket is used instead of a tcp socket. The option works the
same as if you had specified -serial tcp except the unix domain socket path
is used for connections.<br />
The TCP Net Console has two modes of operation. It can send the serial
I/O to a location or wait for a connection from a location. By default the
TCP Net Console is sent to host at the port. If you use the server option
QEMU will wait for a client socket application to connect to the port before
continuing, unless the nowait option was specified. The nodelay option
disables the Nagle buffering algorithm. If host is omitted, 0.0.0.0 is assumed.
Only one TCP connection at a time is accepted. You can use telnet to connect
to the corresponding character device.</li>
</ul>
Side Load apps on Android TV
urn:uuid:47b85517-b980-df40-e6a3-53c9de25a091
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So we bought a Philips 50PFK6540. This is a 50" TV with
<a href="https://en.wikipedia.org/wiki/Ambilight">Ambilight</a> and
<a href="https://en.wikipedia.org/wiki/Android_TV">Android TV</a>.</p>
<p>One of the things I wanted to do from the very start was to load my own APKs. This was not
possible until a recent (2016) update that enabled the <strong>"Install from Unknown Sources"</strong>
setting option.</p>
<p>This made it possible to side load applications. However, things are not as easy as I
initially thought: while installing from unknown sources was now possible, you cannot
do this from the built-in browser. So the procedure is as follows:</p>
<ol>
<li>Go to settings to enable <strong>Install from Unknown sources</strong>, which should be under
<code>Security &amp; Restrictions</code>.</li>
<li>Download <a href="https://play.google.com/store/apps/details?id=com.estrongs.android.pop">ES File Explorer</a>
from the Play Store. A bit of a warning: on phones and tablets, ES File Explorer is no longer
generally recommended. It used to be a reliable file manager and one of the most valuable
Android apps, but it became riddled with ads (many of which are highly intrusive), leading
many users to uninstall it and websites to remove it from their <em>must have</em> lists.
Fortunately, the Android TV app seems to have gone largely untouched by this, so it is still
recommended for the purpose of this tutorial.</li>
<li>Use ES File Explorer to download the APK you want to side load. There are a number of ways to do
this. I used the built-in FTP server, but you could use any method (e.g. thumb drive, cloud
storage, etc.).</li>
<li>Open the APK from ES File Explorer and install it.</li>
</ol>
<p><img src="/images/2017/50PFK6540_12-IMS-nl_NL.png" alt="Philips 50PFK6540" /></p>
Building Signed APKs
urn:uuid:8dc4671d-6d30-fef8-4957-051bbb572982
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Building signed APK's for Android is easy if you know what you
are doing.</p>
<p>This article goes over the preparation steps and the additional
build instructions needed to create signed APKs.</p>
<h3 id="Preparation" name="Preparation">Preparation</h3>
<p>First you need to have a <code>keystore</code>. Use this command:</p>
<pre><code class="language-bash">#!/bin/bash
keystore_file="my_key_store.keystore"
key_name="john_doe"
secret='fake_password'
name='John Doe'
dept='Engineering'
org='TLabs Inc'
place='New York'
province='NY'
country='US'
keytool -genkey -v -keystore "$keystore_file" -alias "$key_name" -keyalg "RSA" -validity 10000 -storepass "$secret" -keypass "$secret" &lt;&lt;EOF
$name
$dept
$org
$place
$province
$country
yes
EOF
</code></pre>
<p>Remember the keystore file and passwords.</p>
<h3 id="Build+instructions" name="Build+instructions">Build instructions</h3>
<p>In your <code>build.gradle</code> you need the following:</p>
<pre><code class="language-javascript">
android {
signingConfigs {
release {
storeFile file("my_keystore.keystore")
storePassword "{password}"
keyAlias "Key_Alias"
keyPassword "{password}"
}
}
buildTypes {
release {
signingConfig signingConfigs.release
}
}
}
</code></pre>
Archiving DVDs and CDs
urn:uuid:b41ab038-e803-614b-0f57-7492b22fb421
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Now that I have an Android TV, I have put away my HTPC and with it the capability to play DVDs
or listen to CDs directly.</p>
<p>So I converted my entire CD and DVD library to media files and stored them on my home NAS.</p>
<p>Since we are talking about hundreds of DVDs and CDs, I used some tools to automate the process.</p>
<h2 id="CD+ripping" name="CD+ripping">CD ripping</h2>
<p>For CD ripping, pretty much everything can be done with <a href="https://abcde.einval.com/wiki/">abcde</a>.
I would use the following command:</p>
<pre><code>abcde -G -k -o mp3 -x</code></pre>
<p>Options:</p>
<ul>
<li><code>-G</code> : Get album art.</li>
<li><code>-k</code> : Keep <code>wav</code> after encoding. This is not really necessary.</li>
<li><code>-o mp3</code> : Output to <code>mp3</code>.</li>
<li><code>-x</code> : Eject the CD after all tracks have been read.</li>
</ul>
<p>Afterwards I would use <a href="http://eyed3.nicfit.net/">eyeD3</a> to embed the
cover art and tweak things. (Note under <a href="http://archlinux.org">archlinux</a>,
<code>eyeD3</code> is installed from the <code>python2-eyed3</code> package).</p>
<h4 id="To+add+cover+art%3A" name="To+add+cover+art%3A">To add cover art:</h4>
<pre><code>eyeD3 --add-image="$cover_file":FRONT_COVER *.mp3</code></pre>
<h2 id="DVD+Ripping" name="DVD+Ripping">DVD Ripping</h2>
<p>For DVD Ripping I was using a couple of homegrown scripts. These can be found on
<a href="https://github.com/alejandroliu/MediaArchiving">github</a>.</p>
<p>I started using <a href="http://vobcopy.org/download/release_notes_and_download.shtml">vobcopy</a>,
but if I were to do this again I would use <a href="http://dvdbackup.sourceforge.net/">dvdbackup</a>
with the <code>-M</code> option. <code>vobcopy</code> is quite old and probably is orphaned by now.</p>
<h3 id="Scripts+for+archiving+media" name="Scripts+for+archiving+media">Scripts for archiving media</h3>
<p>Scripts:</p>
<ul>
<li>archive-dvd : Create an iso image from a DVD.</li>
<li>alltitles : Extract titles/chapters from a DVD.</li>
<li>auto.sh : Used to transcode titles/chapters extracted by <code>alltitles</code></li>
</ul>
<h4 id="archive-dvd" name="archive-dvd">archive-dvd</h4>
<p>This script uses <code>vobcopy</code> and <code>mkisofs</code> to create an ISO file.
Just run the script and insert a DVD, you will get an ISO file
in return.</p>
<h4 id="alltitles" name="alltitles">alltitles</h4>
<p>Usage:</p>
<pre><code>[option_vars] sh alltitles [chapter]</code></pre>
<p>Option vars:</p>
<ul>
<li>drive=[device-path] defaults to /dev/sr0</li>
<li>titles="01 02 03 ..." defaults to all titles in the DVD (as listed by
lsdvd).
You can also specify titles as:
<code>titles="01,1-4 01,5-8"</code>
This will create two files: one with track 1, chapters one through
four (inclusive),
and another one with track 1, chapters five through eight (inclusive)</li>
</ul>
<p>Command options:</p>
<p>chapter: Leave blank for all chapters, otherwise:</p>
<pre><code>-chapter [$start-$end]</code></pre>
<p>Will dump starting from $start until $end (or the end of the title).
If you only want to extract chapter 7 by itself, use <code>-chapter 7-7</code></p>
<h4 id="auto.sh" name="auto.sh">auto.sh</h4>
<p>Usage:</p>
<pre><code>sh $0 [options]</code></pre>
<p>vob files must be the ones extracted from <code>alltitles</code>.</p>
<p>Options:</p>
<ul>
<li>--preview|-p : Only encode 30 seconds from 4 minutes in</li>
<li>--copy|-c : Do only copy</li>
<li>--interlace|-i : Force interlace filter</li>
<li>--no-interlace|+i : Disables interlace filter</li>
</ul>
<h3 id="Dependancies" name="Dependancies">Dependencies</h3>
<ul>
<li>libdvdcss (or equivalent).
This is used by the dvdread library to decode CSS protected DVDs.</li>
<li>libdvdread
This is used to read DVD by a number of binaries.</li>
<li><a href="http://vobcopy.org/download/release_notes_and_download.shtml">vobcopy</a>
Used by <code>archive-dvd</code> to extract the data that will be used to create
the ISO image. Uses <code>libdvdread</code>.</li>
<li>udisks or udisks2
Used by the scripts to detect when a CD/DVD is inserted.</li>
<li>cdrkit
Used to create the iso images by <code>archive-dvd</code>.</li>
<li>lsdvd
Used by <code>alltitles.sh</code> to get track information.</li>
<li>mplayer
Used by <code>alltitles.sh</code> to extract DVD titles/chapters.</li>
<li>ffmpeg
Used by <code>alltitles.sh</code> to encode video.</li>
</ul>
<h3 id="Some+useful+commands" name="Some+useful+commands">Some useful commands</h3>
<p>Using <code>mplayer</code> to play extract:</p>
<pre><code>mplayer -dvd-device /dev/sr0 dvd://$title -chapter $chapter-$chapter -dumpstream -dumpfile ~/$title.VOB</code></pre>
Writing Safe Shell scripts
urn:uuid:6ac46b96-de12-43e9-b696-c5a11ab76d7c
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Writing shell scripts leaves a lot of room to make mistakes, in ways that will cause
your scripts to break on certain input, or (if some input is untrusted) open up security
vulnerabilities. Here are some tips on how to make your shell scripts safer.</p>
<h2 id="Don%27t" name="Don%27t">Don't</h2>
<p>The simplest step is to avoid using shell at all. Many higher-level languages are both
easier to write the code in in the first place, and avoid some of the issues that shell
has. For example, Python will automatically error out if you try to read from an
uninitialized variable (though not if you try to write to one), or if some function call
you make produces an error.</p>
<p>One of shell's chief advantages is that it's easy to call out to the huge variety of
command-line utilities available. Much of that functionality will be available through
libraries in Python or other languages. For the handful of things that aren't, you can
still call external programs. In Python, the
<a href="https://docs.python.org/2/library/subprocess.html">subprocess</a>
module is very useful for this. You should try to avoid passing <code>shell=True</code> to <code>subprocess</code>
(or using <code>os.system</code> or similar functions at all), since that will run a shell, exposing
you to many of the same issues as plain shell has. It also has two big advantages over
shell: it's a lot easier to avoid
<a href="http://www.gnu.org/software/bash/manual/html_node/Word-Splitting.html">word-splitting</a>
or similar issues, and since calls to <code>subprocess</code> will tend to be relatively uncommon,
it's easy to scrutinize them especially hard. When using <code>subprocess</code> or similar tools,
you should still be aware of the suggestions in "Passing filenames or other positional
arguments to commands" below.</p>
<h2 id="Shell+settings" name="Shell+settings">Shell settings</h2>
<p>POSIX sh and especially bash have a number of settings that can help write safe shell
scripts.</p>
<p>I recommend the following in bash scripts:</p>
<pre><code class="language-bash">set -euf -o pipefail</code></pre>
<p>In dash, <code>set -o</code> doesn't exist, so use only <code>set -euf</code>.</p>
<p>What do those do?</p>
<p><a href="http://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html">set -e</a></p>
<p>If a command fails, <code>set -e</code> will make the whole script exit, instead of just resuming
on the next line. If you have commands that can fail without it being an issue, you can
append <code>|| true</code> or <code>|| :</code> to suppress this behavior - for example <code>set -e</code> followed by
<code>false || :</code> will not cause your script to terminate.</p>
<p><a href="http://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html">set -u</a></p>
<p>Treat unset variables as an error, and immediately exit.</p>
<p><a href="http://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html">set -f</a></p>
<p>Disable filename expansion (globbing) upon seeing <code>*</code>, <code>?</code>, etc.</p>
<p>If your script depends on globbing, you obviously shouldn't set this. Instead, you may find
<a href="http://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html">shopt -s failglob</a>
useful, which causes globs that don't get expanded to cause errors, rather than getting
passed to the command with the <code>*</code> intact.</p>
<p><a href="http://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html">set -o pipefail</a></p>
<p><code>set -o pipefail</code> causes a pipeline (for example, <code>curl -s http://sipb.mit.edu/ | grep foo</code>) to produce a failure return code if any command errors. Normally, pipelines only return a failure if the last command errors. In combination with <code>set -e</code>, this will make your script exit if any command in a pipeline errors.</p>
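<p>A quick way to see the difference (a self-contained sketch, using <code>false | true</code> to stand in for a pipeline whose first command fails; run it with bash):</p>
<pre><code class="language-bash"># Without pipefail, the pipeline's exit status is that of the last command:
false | true
echo "exit status: $?"    # prints 0

# With pipefail, any failing command fails the whole pipeline:
set -o pipefail
false | true
echo "exit status: $?"    # prints 1</code></pre>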
<h2 id="Quote+liberally" name="Quote+liberally">Quote liberally</h2>
<p>Whenever you pass a variable to a command, you should probably quote it. Otherwise, the shell
will perform
<a href="http://www.gnu.org/software/bash/manual/html_node/Word-Splitting.html">word-splitting</a> and
<a href="http://www.gnu.org/software/bash/manual/html_node/Filename-Expansion.html">globbing</a>,
which is likely not what you want.</p>
<p>For example, consider the following:</p>
<pre><code class="language-bash">alex@kronborg tmp [15:23] $ dir="foo bar"
alex@kronborg tmp [15:23] $ ls $dir
ls: cannot access foo: No such file or directory
ls: cannot access bar: No such file or directory
alex@kronborg tmp [15:23] $ cd "$dir"
alex@kronborg foo bar [15:25] $ file=*.txt
alex@kronborg foo bar [15:26] $ echo $file
bar.txt foo.txt
alex@kronborg foo bar [15:26] $ echo "$file"
*.txt</code></pre>
<p>Depending on what you are doing in your script, it is likely that the word-splitting and
globbing shown above are not what you expected to have happen. By using <code>"$dir"</code> to access
the contents of the <code>dir</code> variable instead of just <code>$dir</code>, this problem does not arise.</p>
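<p>To make this concrete, here is a minimal sketch (it creates and removes a throwaway directory whose name contains a space):</p>
<pre><code class="language-bash">dir="foo bar"
mkdir -p "$dir"
touch "$dir/a.txt"

# Quoted, "$dir" is a single argument; unquoted it would split into "foo" and "bar".
ls "$dir"             # prints a.txt

# Quoting also keeps glob characters stored in variables from expanding.
pattern="*.txt"
echo "$pattern"       # prints *.txt

rm -r "$dir"</code></pre>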
<p>When writing a wrapper script, you may wish to pass along all the arguments your script
received. Do that with:</p>
<pre><code class="language-bash">wrapped-command "$@"</code></pre>
<p>See
<a href="http://www.gnu.org/software/bash/manual/html_node/Special-Parameters.html">"Special Parameters" in the bash manual</a>
for details on the distinction between <code>$*</code>, <code>$@</code>, and <code>"$@"</code> - the first and second are
rarely what you want in a safe shell script.</p>
<h2 id="Passing+filenames+or+other+positional+arguments+to+commands" name="Passing+filenames+or+other+positional+arguments+to+commands">Passing filenames or other positional arguments to commands</h2>
<p>If you get filenames from the user or from shell globbing, or any other kind of
positional arguments, you should be aware that those could start with a <code>"-"</code>. Even if you
quote correctly, this may still act differently from what you intended. For example,
consider a script that allows somebody to run commands as <code>nobody</code> (exposed over <code>remctl</code>,
perhaps), consisting of just <code>sudo -u nobody "$@"</code>. The quoting is fine, but if a user
passes <code>-u root reboot</code>, <code>sudo</code> will catch the second <code>-u</code> and run it as <code>root</code>.</p>
<p>Fixing this depends on what command you're running.</p>
<p>For many commands, however, <code>--</code> is accepted to indicate that any options are done,
and future arguments should be parsed as positional parameters - even if they look like
options. In the <code>sudo</code> example above, <code>sudo -u nobody -- "$@"</code> would avoid this attack
(though obviously specifying in the <code>sudo</code> configuration that commands can only be run
as <code>nobody</code> is also a good idea).</p>
<p>Another approach is to prefix each filename with <code>./</code>, if the filenames are expected to be in the current directory.</p>
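<p>Both approaches can be demonstrated with a file whose name looks like an option (a minimal sketch run in a scratch directory):</p>
<pre><code class="language-bash">cd "$(mktemp -d)"
touch -- '-n'

# rm '-n' would fail because rm parses it as an option.
# Either terminate option parsing with '--' ...
rm -- '-n'

# ... or anchor the name to the current directory with './'.
touch './-n'
rm './-n'</code></pre>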
<h2 id="Temporary+files" name="Temporary+files">Temporary files</h2>
<p>A common convention to create temporary file names is to use <code>something.$$</code>. This is not
safe. It is better to use <code>mktemp</code>.</p>
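<p>A minimal sketch of the <code>mktemp</code> pattern, paired with a <code>trap</code> so the file is removed even if the script exits early:</p>
<pre><code class="language-bash">tmpfile=$(mktemp)

# Clean up on exit, including exits caused by an error under set -e.
trap 'rm -f "$tmpfile"' EXIT

echo "scratch data" > "$tmpfile"
cat "$tmpfile"        # prints scratch data

# For a scratch directory, use mktemp -d instead.</code></pre>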
<h2 id="Other+resources" name="Other+resources">Other resources</h2>
<p>Google has a <a href="https://google.github.io/styleguide/shell.xml">Shell Style Guide</a>.
As the name suggests, it primarily focuses on good style, but some items are
safety/security-relevant.</p>
<h2 id="Conclusion" name="Conclusion">Conclusion</h2>
<p>When possible, instead of writing a "safe" shell script, <strong>use a higher-level language
like Python</strong>. If you can't do that, the shell has several options that you can enable that
will reduce your chances of having bugs, and you should be sure to quote liberally.</p>
<p>Source <a href="https://sipb.mit.edu/doc/safe-shell/">Writing Safe Shell</a>.</p>
editor to replace emacs
urn:uuid:542f86f5-e4b7-1aed-8d56-4f16385b2227
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>At the end, I switched to <a href="http://www.geany.org/">geany</a></p>
<h2 id="GUI" name="GUI">GUI</h2>
<ul>
<li><a href="https://foicica.com/textadept/">TextAdept</a></li>
<li><a href="http://bluefish.openoffice.nl/index.html">Bluefish Editor</a></li>
<li><a href="http://editra.org/">editra</a></li>
<li><a href="https://wiki.gnome.org/Apps/Gedit">gedit</a></li>
</ul>
<h2 id="Console" name="Console">Console</h2>
<ul>
<li><a href="http://www.jbox.dk/sanos/editor.htm">sanos editor</a></li>
<li><a href="https://github.com/lanurmi/efte">eFTE</a></li>
<li><a href="http://os.ghalkes.nl/tilde/">Tilde</a></li>
</ul>
<h2 id="TCL%2FTK" name="TCL%2FTK">TCL/TK</h2>
<ul>
<li><a href="http://tke.sourceforge.net/">TKE</a></li>
<li><a href="http://mooedit.sourceforge.net/">mooedit</a></li>
<li><a href="https://sites.google.com/site/msedit/home">msedit</a></li>
</ul>
<h2 id="Windows+only" name="Windows+only">Windows only</h2>
<ul>
<li>
<p><a href="https://notepad-plus-plus.org/">Notepad++</a></p>
</li>
<li>
<p>Crimson or Emerald Editors</p>
</li>
<li>
<p>Macros</p>
</li>
<li>
<p>Split Views</p>
</li>
<li>
<p>Interactive search</p>
</li>
<li>
<p>File Browser?</p>
</li>
<li>
<p>Smart indent</p>
</li>
<li>
<p>Parenthesis matching</p>
</li>
<li>
<p>Syntax: PHP, Markdown, C, Java, JavaScript, HTML, C++</p>
</li>
<li>
<p>UTF8</p>
</li>
</ul>
<p>Key Bindings: <a href="http://zzyxx.wikidot.com/key-bindings">bindings</a></p>
<p>Other CUA stuff: <a href="https://ergoemacs.github.io/cua-conflict.html">ergoemacs</a></p>
<p>Others:</p>
<ul>
<li>scite <a href="http://www.scintilla.org/SciTEDownload.html">Download</a>
It now has a single file exe for Windows.</li>
<li>editra</li>
<li>notepadqq</li>
<li>Geany</li>
<li><a href="http://www.scintilla.org/SciTE.html">Scintilla</a>
<ul>
<li>curses based: <a href="http://foicica.com/scinterm/">scinterm</a><br />
includes <em>jinx</em> which is an example for it.</li>
<li>SciTE - the default for Win and Linux.</li>
<li><a href="http://www.scintilla.org/ScintillaRelated.html">Others</a></li>
</ul></li>
<li><a href="http://tke.sourceforge.net/index.html">http://tke.sourceforge.net/index.html</a>
<ul>
<li>TCL based. Can we use cdk? Can it be used in Linux and Windows?</li>
</ul></li>
<li><a href="https://github.com/tihirvon/dex">dex</a></li>
</ul>
<h2 id="Notes" name="Notes">Notes</h2>
<ul>
<li>GUI and TUI, Linux and Windows</li>
<li>Modeless</li>
<li>Syntax highlighting</li>
<li>"Compact"?</li>
<li>Key recording macros</li>
<li>Split windows</li>
</ul>
<h2 id="Emacs+tips" name="Emacs+tips">Emacs tips</h2>
<ul>
<li><a href="http://ergoemacs.org/emacs/emacs_make_modern.html">make emacs modern</a></li>
<li><a href="http://superuser.com/questions/122119/locate-all-emacs-autosaves-and-backups-in-one-folder">single folder autosaves</a></li>
<li><a href="http://emacsredux.com/blog/2013/05/09/keep-backup-and-auto-save-files-out-of-the-way/">backups out of the way</a></li>
<li><a href="http://xenon.stanford.edu/~manku/emacs.html">emacs tips</a></li>
</ul>
<h2 id="Mote+ideas%3A" name="Mote+ideas%3A">More ideas:</h2>
<ul>
<li>
<p><a href="http://www.emacswiki.org/emacs/LinkdMode">LinkdMode</a> Paired with "deft"?</p>
</li>
<li>
<p>iMenu: <code>M-x imenu</code> or:<br />
<code>(add-hook 'c-mode-hook 'imenu-add-menubar-index)</code><br />
Start typing or use TAB completion to find function definitions.
See <a href="http://www.emacswiki.org/cgi-bin/wiki/ImenuMode">imenuMode</a></p>
</li>
<li>
<p><a href="http://www.emacswiki.org/emacs/PredictiveMode">Predictive Mode</a></p>
</li>
<li>
<p>Record, play, re-play:<br />
<code>(global-set-key [f10] 'start-kbd-macro)</code><br />
<code>(global-set-key [f11] 'end-kbd-macro)</code><br />
<code>(global-set-key [f12] 'call-last-kbd-macro)</code></p>
</li>
<li>
<p>Selective display:<br />
<code>M-1 C-x $</code> to activate<br />
<code>C-x $</code> to go back<br />
Or create shortcuts:</p>
<pre><code>(defun jao-toggle-selective-display ()
  (interactive)
  (set-selective-display (if selective-display nil 1)))
(global-set-key [f1] 'jao-toggle-selective-display)</code></pre>
</li>
<li>
<p>CUA mode:</p>
<pre><code>(setq cua-enable-cua-keys nil)
(setq cua-highlight-region-shift-only t) ;; no transient mark mode
(setq cua-toggle-set-mark nil) ;; original set-mark behavior, i.e. no transient-mark-mode
(cua-mode)</code></pre>
</li>
</ul>
MariaDB Quickest Quick start
urn:uuid:9958b877-8083-6a76-3b29-86d1259b1fef
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This article outlines the bare minimum to get
a MariaDB or MySQL database up and running.</p>
<p>It covers CentOS/RHEL and Arch Linux installs.</p>
<p>Make sure your system is up to date:</p>
<table>
<thead>
<tr>
<th>CentOS/RHEL</th>
<th>ArchLinux</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>yum update -y</code></td>
<td><code>pacman -Syu</code></td>
</tr>
</tbody>
</table>
<p>Install the software:</p>
<table>
<thead>
<tr>
<th>CentOS/RHEL</th>
<th>ArchLinux</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>yum install mariadb-server</code></td>
<td><code>pacman -S mariadb</code></td>
</tr>
<tr>
<td></td>
<td><code>mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql</code></td>
</tr>
</tbody>
</table>
<p>Start the database service:</p>
<pre><code> systemctl start mariadb</code></pre>
<p>Check if it is running:</p>
<pre><code> systemctl is-active mariadb.service
systemctl status mariadb</code></pre>
<p>The following step is optional but highly recommended:</p>
<pre><code> mysql_secure_installation</code></pre>
<p>Enable database to start on start-up:</p>
<pre><code> systemctl enable mariadb</code></pre>
<p>Enter SQL:</p>
<pre><code> mysql -u root -p</code></pre>
<p>Creating database:</p>
<pre><code> create database bugzilla;
FLUSH PRIVILEGES;</code></pre>
<p>Create user:</p>
<pre><code> GRANT ALL PRIVILEGES ON bugzilla.* TO 'warren'@'localhost' IDENTIFIED BY 'mypass';
GRANT ALL PRIVILEGES ON killrate.* TO 'pocketmine'@'%' IDENTIFIED BY 'mypass';
FLUSH PRIVILEGES;</code></pre>
Jaxon: Call PHP classes from JavaScript using AJAX
urn:uuid:cd7082fa-7d88-064e-b57d-ca423ab2d41e
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="https://www.jaxon-php.org/" title="Jaxon PHP library">Jaxon</a> is an open source PHP library for easily creating Ajax web applications. It allows a web page to make direct Ajax calls to PHP classes that will in turn update its content, without reloading the entire page.</p>
<p>Jaxon implements a complete set of PHP functions to define the contents and properties of the web page. Several plugins exist to extend its functionalities and provide integration with various PHP frameworks and CMS.</p>
Building chroots with yum
urn:uuid:2cbd831d-e896-38b2-ab77-b4d1f6d7bfb3
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Building CHROOTs with Yum in a single command:</p>
<pre><code>yum --releasever=7 --installroot=/chroot/jail2 -y install httpd</code></pre>
<p>This will install httpd with all its dependencies. If you are on x86_64 and want a 32-bit chroot:</p>
<pre><code>setarch i386 yum --releasever=6 --installroot=/chroot/jail32 -y install httpd</code></pre>
My WordPress plugins
urn:uuid:8454e18f-26c3-24b1-1c59-9996b5e8e663
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>For my own purposes I have written a number of WordPress plugins.</p>
<ol>
<li><a href="https://github.com/iliu-net/S3Copy">S3Copy</a> - Makes backup copies of your pictures to an S3
Compatible server. I use <a href="http://sirv.com">sirv.com</a> myself. It also mangles <code>&lt;img&gt;</code> tags so
files are served from the S3 bucket.</li>
<li><a href="https://github.com/iliu-net/wptools">wptools</a> - A collection of WordPress related functionality.</li>
<li><a href="https://github.com/iliu-net/auto-content">auto-content</a> - A basic post from template plugin.</li>
<li><a href="https://github.com/iliu-net/simple-members-only">simple-members-only</a> - A fork of a defunct plugin.</li>
</ol>
vr starting points
urn:uuid:65aaafbe-7a77-2778-959c-5c0db67dc757
2024-03-05T00:00:00+01:00
Alejandro Liu
<ul>
<li><a href="https://github.com/borismus/webvr-boilerplate">vr boilerplate</a></li>
<li><a href="https://virtualrealitypop.com/experimenting-with-threejs-for-virtual-reality-and-google-cardboard-86e67ba31b1c#.6xm3h9kyj">threejs</a></li>
<li><a href="https://vr.chromeexperiments.com/">vr chrome experiments</a></li>
<li><a href="https://www.sitepoint.com/filtering-reality-with-javascript-google-cardboard/">google cardboard</a></li>
<li><a href="https://www.sitepoint.com/bringing-vr-to-web-google-cardboard-three-js/">google cardboard</a></li>
<li><a href="https://opensource.com/life/16/11/build-virtual-reality-app-linux">open source linux vr app</a></li>
</ul>
<p>WebGL frameworks:</p>
<ul>
<li><a href="http://biz.turbulenz.com/developers">Native support?</a></li>
<li><a href="http://hexgl.bkcore.com/">threejs game</a></li>
</ul>
<p>Could it be converted to WebVR?</p>
<ul>
<li><a href="https://playcanvas.com/">OpenSource Engine with Proprietary on-line dev tools</a></li>
<li><a href="https://aframe.io/">Targeted for WebVR</a></li>
<li><a href="http://babylonjs.com/">TypeScript?</a></li>
</ul>
Hosting WordPress on OpenShift
urn:uuid:0a198bc3-0a1a-59a7-fd2f-2fbc728bbbc9
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><img src="/images/2016/img_0423.jpg" alt="openshift" /></p>
<p>So I finally moved my WordPress web sites to OpenShift.</p>
<p>OpenShift is a cloud based Platform-as-a-Service offering from RedHat. And while there is a learning curve I would say that so far it works great.</p>
<p>My implementation is a fully cloud-based solution. It makes use of the following services:</p>
<ul>
<li>GitHub for code hosting</li>
<li>Travis-CI for continuous integration.</li>
<li>OpenShift (with autoscaling) for the database and web server</li>
<li>CloudFlare</li>
<li>Sirv.com for image hosting</li>
<li>Facebook and G+ integration</li>
<li>Google drive for cloud backups</li>
</ul>
<p>All the code can be examined on GitHub.</p>
<p>For the WordPress hosting I started with the OpenShift WordPress QuickStart and added scripts to deploy directly from Github to OpenShift via Travis.</p>
<p>Actually Travis has that functionality built in but it was a little quirky for my use cases so I wrote my own.</p>
<p>On the OpenShift side, I added code to download add-ons (plugins and themes) automatically and to deploy from the same repo to multiple apps.</p>
<p>The rationale for this is to get addons installed automatically in the
event of autoscaling while keeping the github commit log fairly tidy.</p>
<p>Also created a couple of Wordpress plugins to:</p>
<ul>
<li>Misc shortcodes and stuff</li>
<li>Automatically upload pictures to an S3 cloud storage (sirv.com in my case, but this is configurable)</li>
</ul>
CSR ideas
urn:uuid:f4b1f382-871c-3b1a-eaa6-70821e6814c5
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Work improvements</p>
<h2 id="NFR" name="NFR">NFR</h2>
<ul>
<li>Javascript single page application
<ul>
<li>JS GUI</li>
</ul></li>
<li>Retargetable back-end:
<ul>
<li>Local</li>
<li>Remote</li>
<li>Synchronization utility</li>
</ul></li>
<li>Multi-user (authentication?)</li>
<li>Output Excel</li>
<li>Output changes</li>
</ul>
<h2 id="FR" name="FR">FR</h2>
<ul>
<li>requirements</li>
<li>roadmap objects
<ul>
<li>release time lines</li>
<li>indicators</li>
<li>descriptive text</li>
</ul></li>
<li>Meta data
<ul>
<li>attributes (i.e owner, reviewers, etc)</li>
<li>versioning</li>
</ul></li>
<li>Milestone data</li>
<li>Detail data
<ul>
<li>release details</li>
<li>line items</li>
</ul></li>
</ul>
game lists
urn:uuid:192615cd-0071-9037-85df-af82318d53d6
2024-03-05T00:00:00+01:00
Alejandro Liu
<ul>
<li>Cybernator</li>
<li>Darius Twin</li>
<li>Another World | Out of this World</li>
<li>Front Mission Series</li>
<li>Strike Gunner</li>
<li>The Legend ...</li>
</ul>
<p>Super Bomberman sold for about 59.99 dollars, but later it was also sold alone for approximately 29.95.</p>
<p>Multitap compatible games:</p>
<ul>
<li>Barkley: Shut Up and Jam!, Bill Walsh College Football, College Slam, Elite Soccer, ESPN National Hockey Night, FIFA International Soccer, FIFA '96, Firestriker, Hammerlock Wrestling, Head On Soccer, J-League Soccer, Looney Toons B-Ball, Lord of the Rings, Madden '94, Madden '95, Madden '96, Madden '97, Micro Machines, Natsume Championship Wrestling, NBA Give 'n Go, NBA Jam, NBA Jam TE, NBA Live 95, NBA Live 96, NBA Live 97, NCAA Final Four, NCAA Football, NHL '94, NHL '95, NHL '96, NHL '97, Olympic Summer Games, Peace Keepers, Pieces, Rap Jam Vol. 1, Saturday Night Slam Masters, Secret of Mana, Slam Dunk TV Animation (Japanese), Soccer Shootout, Sporting News: Power Baseball, Sterling Silver: End 2 End, Street Hockey '95, Street Racer, Super Bomberman 1, Super Bomberman 2, Super Bomberman 3, Super Bomberman 4 (Japanese), Super Bomberman 5 (Japanese), Super Tetris 3 (Japanese), Tiny Toons Wacky Sports, Virtual Soccer (Japanese), Top Gear 3000, WWF Raw. (Puh!).</li>
<li>Peacekeepers</li>
<li>Secret of Mana</li>
<li>Firestriker</li>
<li>Bakukyuu Renpatsu!! Super B-Daman: Port 2</li>
<li>Bakutou Dochers: Port 2</li>
<li>Barkley Shut Up and Jam!: Port 2</li>
<li>Battle Cross: Port 2</li>
<li>Battle Jockey: Port 2</li>
<li>Bill Walsh College Football: Port 2</li>
<li>Capcom's Soccer Shootout: Port 2</li>
<li>College Slam: Port 2</li>
<li>Crystal Beans From Dungeon Explorer: Port 2</li>
<li>Dragon - The Bruce Lee Story: Port 2</li>
<li>Dream Basketball - Dunk and Hoop: Port 2</li>
<li>Dynamic Stadium: Port 2</li>
<li>ESPN National Hockey Night: Port 2</li>
<li>FIFA 98: Port 2</li>
<li>FIFA International Soccer: Port 2</li>
<li>FIFA Soccer 96: Port 2</li>
<li>FIFA Soccer 97: Port 2</li>
<li>Final Set: Port 2</li>
<li>Fire Striker: Port 2</li>
<li>From TV Animation Slam Dunk - SD Heat Up!!: Port 2</li>
<li>Go! Go! Dodge League: Port 2</li>
<li>Hammerlock Wrestling: Port 2</li>
<li>Hat Trick Hero 2: Port 2</li>
<li>Head-On Soccer: Port 2</li>
<li>Hebereke no Oishii Puzzle ha Irimasenka: Port 2</li>
<li>Human Grand Prix III - F1 Triple Battle: Port 2</li>
<li>Human Grand Prix IV - F1 Dream Battle: Port 2</li>
<li>Hungry Dinosaurs: Port 2</li>
<li>International Superstar Soccer Deluxe: Port 2</li>
<li>J. League Excite Stage '94: Port 2</li>
<li>J. League Excite Stage '95: Port 2</li>
<li>J. League Excite Stage '96: Port 2</li>
<li>J. League Super Soccer '95: Port 2</li>
<li>J. League Super Soccer: Port 2</li>
<li>JWP Joshi Pro Wrestling - Pure Wrestle Queens: Port 2</li>
<li>Jikkyou Power Pro Wrestling '96: Port 2</li>
<li>Jimmy Connors Pro Tennis Tour: Port 2</li>
<li>Kunio-kun no Dodge Ball dayo Zenin Shuugou!: Port 2</li>
<li>Looney Tunes Basketball: Port 2</li>
<li>Madden NFL '94: Port 2</li>
<li>Madden NFL '95: Port 2</li>
<li>Madden NFL '96: Port 2</li>
<li>Madden NFL '97: Port 2</li>
<li>Madden NFL '98: Port 2</li>
<li>Micro Machines 2 - Turbo Tournament: Port 2</li>
<li>Micro Machines: Port 2</li>
<li>Mizuki Shigeru no Youkai Hyakkiyakou: Port 2</li>
<li>Multi Play Volleyball: Port 2</li>
<li>NBA Give 'N Go: Port 2</li>
<li>NBA Hang Time: Port 2</li>
<li>NBA Jam - Tournament Edition: Port 2</li>
<li>NBA Jam: Port 2</li>
<li>NBA Live 95: Port 2</li>
<li>NBA Live 96: Port 2</li>
<li>NBA Live 97: Port 2</li>
<li>NBA Live 98: Port 2</li>
<li>NCAA Final Four Basketball: Port 2</li>
<li>NCAA Football: Port 2</li>
<li>NFL Quarterback Club 96: Port 2</li>
<li>NFL Quarterback Club: Port 2</li>
<li>NHL '94: Port 2</li>
<li>NHL '98: Port 2</li>
<li>NHL Pro Hockey '94: Port 2</li>
<li>Natsume Championship Wrestling: Port 2</li>
<li>Peace Keepers, The: Port 2</li>
<li>Pieces: Port 2</li>
<li>Rap Jam - Volume One: Port 2</li>
<li>Saturday Night Slam Masters: Port 2</li>
<li>Secret of Mana: Port 2</li>
<li>Shin Nippon Pro Wrestling '94 - Battlefield in Tokyo Dome: Port 2</li>
<li>Shin Nippon Pro Wrestling - Chou Senshi in Tokyo Dome: Port 2</li>
<li>Shin Nippon Pro Wrestling Kounin '95 - Tokyo Dome Battle 7: Port 2</li>
<li>Smash Tennis: Port 2</li>
<li>Sporting News, The - Power Baseball: Port 2</li>
<li>Sterling Sharpe End 2 End: Port 2</li>
<li>Street Hockey '95: Port 2</li>
<li>Street Racer: Port 2</li>
<li>Sugoi Hebereke: Port 2</li>
<li>Sugoro Quest ++ Dicenics: Port 2</li>
<li>Super Bomberman - Panic Bomber W: Port 2</li>
<li>Super Bomberman 2: Port 2</li>
<li>Super Bomberman 3: Port 2</li>
<li>Super Bomberman 4: Port 2</li>
<li>Super Bomberman 5: Port 2</li>
<li>Super Bomberman: Port 2</li>
<li>Super Fire Pro Wrestling - Queen's Special: Port 2</li>
<li>Super Fire Pro Wrestling Special: Port 2</li>
<li>Super Fire Pro Wrestling X Premium: Port 2</li>
<li>Super Fire Pro Wrestling X: Port 2</li>
<li>Super Formation Soccer 94 - World Cup Final Data: Port 2</li>
<li>Super Formation Soccer 94: Port 2</li>
<li>Super Formation Soccer 95 della Serie A - UCC Xaqua: Port 2</li>
<li>Super Formation Soccer 95 della Serie A: Port 2</li>
<li>Super Formation Soccer 96: Port 2</li>
<li>Super Formation Soccer II: Port 2</li>
<li>Super Ice Hockey: Port 2</li>
<li>Super Kyousouba - Kaze no Sylphid: Port 2</li>
<li>Super Power League: Port 2</li>
<li>Super Tekkyuu Fight!: Port 2</li>
<li>Super Tetris 3: Port 2</li>
<li>Syndicate: Port 2</li>
<li>Tenryu Genichiro no Pro Wrestling Revolution: Port 2</li>
<li>Tiny Toon Adventures - Wild & Wacky Sports: Port 2</li>
<li>Top Gear 3000: Port 2</li>
<li>Turbo Toons: Port 2</li>
<li>Virtual Soccer: Port 2</li>
<li>Vs. Collection: Port 2</li>
<li>WWF Raw: Port 2</li>
<li>Yuujin no Furi Furi Girls: Port 2</li>
<li>Zero 4 Champ RR-Z: Port 2</li>
<li>Zero 4 Champ RR: Port 2</li>
</ul>
<p>C64 notes</p>
<p><a href="https://www.c64.wiki.com/index.php">c64 wiki</a></p>
<p>VICE with SDL</p>
<p>Coop games</p>
<ul>
<li>The Goonies</li>
<li>Realm of Impossibility</li>
<li>ACE Air Combat Emulator</li>
<li>Alien Syndrome</li>
<li>Armalyte</li>
<li>Bubble Bobble</li>
<li>Katakis</li>
<li>Mario Bros (Atari)</li>
<li>Mega Apocalypse</li>
<li>Castles of Doctor Creep</li>
<li>Wizball</li>
</ul>
<p>Head to head</p>
<ul>
<li>MULE</li>
<li>Batty</li>
<li>Bomb Squad</li>
<li>Highlander</li>
<li>International Soccer</li>
<li>Pitstop II</li>
<li>Spy vs Spy</li>
<li>The way of the exploding fist</li>
<li>Trailblazer</li>
<li>Wizard of Wor</li>
</ul>
JavaScript resources
urn:uuid:276c94cb-bdc8-8940-1302-cb97867e63f2
2024-03-05T00:00:00+01:00
Alejandro Liu
<ul>
<li><a href="http://www.codeproject.com/Articles/756189/Master-Chief-CreateJS-TypeScript">Typescript</a></li>
<li><a href="http://voxeljs.com/">voxeljs</a></li>
<li><a href="https://github.com/Irrelon/ige">ige</a></li>
</ul>
<p>More powerful github web pages</p>
<ul>
<li><a href="https://developer.github.com/v3/">github api</a></li>
<li><a href="https://javascriptweblog.wordpress.com/2010/11/29/json-and-jsonp/">json & jsonp</a></li>
<li><a href="http://stackoverflow.com/questions/26416727/cross-origin-resource-sharing-on-github-pages">cross origin resource sharing</a></li>
</ul>
<p>Use JavaScript XHR to get data from github API (CORS is enabled).</p>
<p>Show:</p>
<ul>
<li>
<p>Download counts for a project</p>
</li>
<li>
<p>Latest release tag</p>
</li>
<li>
<p><a href="http://www.typescriptlang.org/docs/tutorial.html">typescript tutorial</a></p>
</li>
<li>
<p><a href="https://www.devbridge.com/articles/say-hello-to-typescript/">intro to typescript</a></p>
</li>
</ul>
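<p>A minimal sketch of the parsing side of this. The JSON below is a fabricated stand-in for what the releases endpoint returns; in practice the data would come from an XHR/fetch call (or <code>curl</code>), and <code>OWNER/REPO</code> are placeholders:</p>

```shell
# Stand-in for: curl -s https://api.github.com/repos/OWNER/REPO/releases/latest
# (the JSON below is a fabricated sample, not real API output)
json='{"tag_name":"v1.2.3","assets":[{"name":"app.zip","download_count":42}]}'

# Latest release tag (a crude sed parse; jq would be more robust)
tag=$(printf '%s' "$json" | sed -n 's/.*"tag_name":"\([^"]*\)".*/\1/p')
echo "latest tag: $tag"

# Total download count summed across assets
total=$(printf '%s' "$json" | grep -o '"download_count":[0-9]*' \
        | awk -F: '{s+=$2} END{print s+0}')
echo "downloads: $total"
```

The same two fields would drive a "downloads badge" or "latest release" widget on a GitHub Pages site.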
Programming 2016
urn:uuid:12750060-607b-549b-cbe5-bf6d022cd7e0
2024-03-05T00:00:00+01:00
Alejandro Liu
<h2 id="Programming+2016" name="Programming+2016">Programming 2016</h2>
<ol start="2">
<li>GWT and GWT on Mobile and Java servlets
<ul>
<li>Generate Excel
<a href="http://www.gwtproject.org/overview.html">http://www.gwtproject.org/overview.html</a>
<a href="http://www.m-gwt.com/">http://www.m-gwt.com/</a></li>
</ul></li>
</ol>
<p>Java based:</p>
<ul>
<li>
<p>Game Api <a href="https://libgdx.badlogicgames.com/">libgdx</a></p>
</li>
<li>
<p>Other Game lib <a href="https://jmonkeyengine.org/">JMonkeyEngine</a></p>
</li>
<li>
<p><a href="https://software.intel.com/en-us/multi-os-engine">MultiOS</a></p>
</li>
<li>
<p><a href="http://j2objc.org/">j2objc</a></p>
</li>
<li>
<p>RoboVM forks:</p>
<ul>
<li><a href="https://github.com/FlexoVM">FlexoVM</a></li>
<li>BugVM</li>
</ul>
</li>
<li>
<p>Swift?</p>
</li>
<li>
<p>D <a href="https://wiki.dlang.org/LDC">status</a></p>
</li>
</ul>
<p>Programming 2015</p>
<ul>
<li>Cross-Platform: Linux, Windows, Android, iOS, WebApp?</li>
<li>Run-Time: >100MB?</li>
<li>ease of deployment (wrap app and drops)</li>
<li>gui programming</li>
<li>object classes and types </li>
<li>memory management </li>
<li>speed</li>
<li>skills marketability </li>
</ul>
<p>Development </p>
<p><a href="http://hyperpolyglot.org/web">http://hyperpolyglot.org/web</a> - comparison between TypeScript, Dart, Hack (php like)</p>
<p>ANGULAR</p>
<ol>
<li><a href="https://angular.io/docs/ts/latest/quickstart.html">https://angular.io/docs/ts/latest/quickstart.html</a></li>
<li><a href="https://angular.io/docs/ts/latest/tutorial/">https://angular.io/docs/ts/latest/tutorial/</a></li>
</ol>
<p>TypeScript</p>
<ul>
<li>
<p>Headers: <a href="http://definitelytyped.org/">http://definitelytyped.org/</a></p>
</li>
<li>
<p>TypeScript - compiled, optionally typed language that compiles to JavaScript</p>
</li>
<li>
<p>node-webkit - Desktop apps</p>
</li>
<li>
<p>ionic framework - deploy to phone</p>
</li>
</ul>
<p>Frameworks:</p>
<ul>
<li>
<p><a href="https://angular.io/">https://angular.io/</a> - JavaScript framework for web apps</p>
</li>
<li>
<p>jQuery</p>
</li>
<li>
<p>"app.js" : This is a UI library for writing mobile apps</p>
</li>
<li>
<p>TypeScript?</p>
</li>
<li>
<p><a href="http://blog.scottlogic.com/2014/09/10/node-webkit.html">http://blog.scottlogic.com/2014/09/10/node-webkit.html</a></p>
</li>
<li>
<p>Angular.JS? <a href="https://angular.io/">https://angular.io/</a></p>
</li>
<li>
<p>React Native <a href="https://facebook.github.io/react-native/">https://facebook.github.io/react-native/</a></p>
</li>
<li>
<p><a href="http://appjs.com/">http://appjs.com/</a> - </p>
</li>
<li>
<p>Enyo? <a href="http://enyojs.com/">http://enyojs.com/</a></p>
</li>
<li>
<p><a href="http://noeticforce.com/best-hybrid-mobile-app-ui-frameworks-html5-js-css">http://noeticforce.com/best-hybrid-mobile-app-ui-frameworks-html5-js-css</a></p>
</li>
</ul>
<p>RunTimes:</p>
<ul>
<li>NW.js</li>
<li>electron (<a href="https://github.com/atom/electron">https://github.com/atom/electron</a>)</li>
</ul>
<p>Facebook's React Native</p>
<p>JavaScript supersets:</p>
<ul>
<li>TypeScript</li>
<li>Dart</li>
<li>CoffeeScript</li>
</ul>
<p>Translateable:</p>
<ul>
<li>Google Web Toolkit (Java to JavaScript)</li>
<li>Pyjamas (Python to Javascript)</li>
<li>HaXe</li>
</ul>
<hr />
<p>Dev Notes</p>
<p>Replacement for Make and Autoconf:
<a href="https://embedthis.com/makeme/">MakeMe</a></p>
<p>(If you don't have root but have Android 4+ you can use the command-line program adb from the Android SDK platform tools to make backups via a desktop computer)</p>
<p><a href="http://www.chromebookhq.com/five-best-online-ides-making-the-switch-to-a-chromebook/">http://www.chromebookhq.com/five-best-online-ides-making-the-switch-to-a-chromebook/</a></p>
<h2 id="Dev+Tools" name="Dev+Tools">Dev Tools</h2>
<p>Alternative languages:</p>
<ul>
<li>D : better than C, but not over-the-top like C++? Covers only Win and Linux</li>
<li>Vala : Kinda like C# but for Gnome. Covers Win and Linux. (Android maybe through NDK).</li>
<li>Java: Kinda over the top and heavy. Covers Win and Linux. Android yes, but different GUI library. iOS probably yes.</li>
<li>Python: scripting language. Win, Linux. Android maybe... iOS maybe...</li>
<li>Javascript: scripting language. ALL PLATFORMS.</li>
</ul>
<p>Other options:</p>
<ul>
<li>Python with <a href="http://kivy.org/">Kivy</a></li>
<li><a href="http://haxe.org/">Haxe</a></li>
</ul>
<h2 id="Build+Tools" name="Build+Tools">Build Tools</h2>
<ul>
<li>MakeKit - autotools look & feel but lighter</li>
<li><a href="http://www.dervishd.net/libre-software-projects">mobs</a>: autoconf workalike.</li>
</ul>
<h2 id="Resources" name="Resources">Resources</h2>
<ul>
<li><a href="http://www.dervishd.net/libre-software-projects">http://www.dervishd.net/libre-software-projects</a>
syslogd in perl, mobom perl modules.</li>
</ul>
<h1>My own Notes App</h1>
<pre><code>JumpNote + OI Notepad
(Background Sync, Tags support)
      |
      v
Simple Note backend
      |
      v
Tags UI (Filter, modify tags)
      |
      v
Task UI
      |
      v
Widget</code></pre>
<p>WebApp + Mobile Dev:</p>
<ul>
<li><a href="http://demux.vektorsoft.com/demux/">http://demux.vektorsoft.com/demux/</a>
A Java framework that works on multiple platforms.</li>
<li><a href="http://asterclick.drclue.net/WBEA.html">http://asterclick.drclue.net/WBEA.html</a>
Allows for webapps on desktops</li>
<li>PhoneGap</li>
<li><a href="http://www.mobilexweb.com/emulators">http://www.mobilexweb.com/emulators</a>
Test mobile apps on desktop</li>
<li>Javascript optimizer:
<a href="https://developers.google.com/closure/">https://developers.google.com/closure/</a>
<a href="https://github.com/mishoo/UglifyJS">https://github.com/mishoo/UglifyJS</a></li>
<li>JS Compiler: <a href="https://developer.mozilla.org/en/Rhino_JavaScript_Compiler">https://developer.mozilla.org/en/Rhino_JavaScript_Compiler</a></li>
<li>Java 2 JS Toolkits:
<a href="http://code.google.com/webtoolkit/">http://code.google.com/webtoolkit/</a>
<a href="http://j2s.sourceforge.net/">http://j2s.sourceforge.net/</a></li>
<li>Python 2 JS Toolkits:
<a href="http://pyjs.org/">http://pyjs.org/</a></li>
<li>JS Compiler for command line:
<a href="https://developers.google.com/v8/">https://developers.google.com/v8/</a>
<a href="http://en.wikipedia.org/wiki/Nodejs">http://en.wikipedia.org/wiki/Nodejs</a></li>
<li><a href="http://this-voice.org/alchemy/pride.html">http://this-voice.org/alchemy/pride.html</a> Compiling Android stuff</li>
</ul>
<p>Documentation around Syncing...</p>
<ul>
<li><a href="http://ericmiles.wordpress.com/2010/09/22/connecting-the-dots-with-android-syncadapter/">http://ericmiles.wordpress.com/2010/09/22/connecting-the-dots-with-android-syncadapter/</a></li>
<li><a href="http://developer.android.com/resources/samples/SampleSyncAdapter/index.html">http://developer.android.com/resources/samples/SampleSyncAdapter/index.html</a></li>
</ul>
<p>Other Notes:</p>
<ul>
<li>Perki replacement that runs on Android.</li>
<li>Use WebKit/PhoneGap + Javascript and HTML5</li>
<li>Markdown library for Javascript</li>
<li>Markdown editor for javascript</li>
<li>TXGR converted to HTML5 Canvas</li>
<li>How do we do background sync?</li>
</ul>
<p>More example code:</p>
<ul>
<li><a href="http://code.google.com/p/jumpnote/">http://code.google.com/p/jumpnote/</a></li>
<li><a href="http://www.java2s.com/Open-Source/Android/CatalogAndroid.htm">http://www.java2s.com/Open-Source/Android/CatalogAndroid.htm</a></li>
</ul>
<p>We want to have it for Android, Linux and Windows.</p>
<p><a href="http://libreplanet.org/wiki/Group:Hardware/Howto_have_a_free_android_sdk">http://libreplanet.org/wiki/Group:Hardware/Howto_have_a_free_android_sdk</a></p>
<p>We need to research:</p>
<pre><code>* Alternative to freewrap
  * http://jsmooth.sourceforge.net/
  * http://launch4j.sourceforge.net/
  * http://www.thisiscool.com/gcc_mingw.htm
  * http://vertis.github.com/2007/06/24/native-java-with-gcj-and-swt.html
  * http://winrun4j.sourceforge.net/
* Alternative to Canvas
  * http://www.piccolo2d.org/
  * http://www.jhotdraw.org/
  * http://www.manageability.org/blog/stuff/open-source-structured-graphics-libraries-in-java</code></pre>
<p>Contains an overview of options...</p>
<ul>
<li>
<p><a href="http://jean-philippe.leboeuf.name/notebook/archives/000315.html">http://jean-philippe.leboeuf.name/notebook/archives/000315.html</a>
Another overview of options</p>
</li>
<li>
<p>Which Toolkit to use (SWT, Swing, AWT, etc)</p>
</li>
</ul>
<p>An alternative to Eclipse for Android Development:</p>
<p><a href="http://freecode.com/projects/pride">http://freecode.com/projects/pride</a></p>
<p>A freewrap-like tool for python:</p>
<p><a href="http://freecode.com/projects/pyinstaller">http://freecode.com/projects/pyinstaller</a></p>
<p>More Android Dev options:</p>
<ul>
<li>PhoneGAP</li>
<li><a href="http://kivy.org/">http://kivy.org/</a> Python, multi platform</li>
<li><a href="https://code.google.com/p/android-python27/w/list">https://code.google.com/p/android-python27/w/list</a> - Python on android</li>
</ul>
Managing our personal finances
urn:uuid:4c85ab8a-bfbb-bacf-61b8-183c5a907cc2
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>During my last vacation I wanted to move how we manage our personal
finances away from the ad-hoc spreadsheet that we had been using for
the past few years. I envisioned something server side, so I wouldn't
need to add software to my wife's computer. An initial quick run-through
of server-side software did not yield anything that interested
me. In general I could only find <em>full</em> accounting applications, which
would have been overkill for personal finance/expense tracking. So
then I checked some desktop applications. I looked at the
following:</p>
<ul>
<li><a href="http://homebank.free.fr/">HomeBank</a></li>
<li><a href="http://www.grisbi.org/">Grisbi</a></li>
</ul>
<p>I found many others, but I only tried these two. The most common
suggestion among Open Source advocates is <a href="https://www.gnucash.org/">GNU Cash</a>,
but I did not try that because it was too big for my modest
requirements. I installed the two above but was not able to get them to do
what I wanted, which was to import transactions from my bank
into the application. So I went back to searching the web, this time
looking for "personal finance" instead of "accounting",
and found this web application (amongst others):</p>
<ul>
<li><a href="https://sourceforge.net/projects/pfmgr/">PFMGR</a></li>
</ul>
<p>So I installed it and was able to run it on my home server. (This was
the first of these types of applications that I managed to run, so I was
initially happy.) Running it, it looked OK: it had an AJAX based user
interface, etc. It did not have any way to import the data
files from my bank, but since it is Open Source, I could easily make
up something for it. So I modified it to include a page to import my
bank data. This seemed to work OK. That's when the annoyances started.
The <strong>PFMGR</strong> author had a specific use case in mind, so it can track not
only money accounts but also share accounts. While nice, I did not have
such investments, so that feature was unused, but it would still show on the
forms (annoying). A lot of the functionality of the software was around
check reconciliation. Since I don't use checks, that is not useful for
me at all. Finally, I couldn't get the reports to work at all, and the
times that they did work, they did not give me the information that I
wanted. I figured, since this is all open source, I could just add/remove
features the way I wanted. Which, curiously, turned out to mean I would remove
all the features and just keep <strong>PFMGR</strong> as a simple CRUD application.
So I figured I might as well toss it all out and find a small PHP
framework that could do CRUD. So I came across this tutorial:</p>
<ul>
<li><a href="https://foysalmamun.wordpress.com/2013/03/27/fat-free-crud-with-mvc-tutorial/">FatFree :: CRUD with MVC</a></li>
</ul>
<p>It gives a gentle intro to the <a href="http://fatfreeframework.com/home">FatFreeFramework</a>.
This was just what I was looking for; for simple applications it is
perfect, and I was able to get started on my own personal finance
application. I did run into a few problems, most of them around the fact that
Fat-Free (aka F3), although it has a very gentle learning curve and one
can get things started very quickly, did a few things that I was not
expecting. Well, actually, for a novice programmer, it does things right.
Being used to plain PHP, I would add my own code to escape
and protect against invalid inputs; F3 was doing it automatically, which
caused me a few headaches until I realized what F3 was doing. My main
problem with F3 is that my schema required a wide varchar column, and
that seemed to cause problems with its ORM mapper. Later I will write
a simple test case and see if I can track down the issue.</p>
Starting with 3D Printing
urn:uuid:4422c0e0-33ba-3c40-1ed3-f2247f6e73bb
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So I finally tried my hand at 3D printing. Obviously I did not buy a 3D printer. These are either quite expensive or you need to assemble them yourself, which I don't think is within my abilities.</p>
<p>To get started, you first need a 3D model to print. There are several 3D models available on <a href="http://www.thingiverse.com/">Thingiverse</a>, however I actually wanted to make my own model. After all, that is the whole point of 3D printing: custom made parts/objects that can be printed as needed.</p>
<p>To create a 3D model you need some 3D modelling software. For my very first model I opted for <a href="https://www.tinkercad.com/">TinkerCAD</a>. This is software that runs on the cloud that lets you create your own 3D models. This is particularly interesting because you don't need to install anything on your computer and it would essentially run on anything where a web browser runs.</p>
<p>For a web based application, it is quite responsive and feature-full. You can use (like me) a <a href="http://www.facebook.com/">facebook</a> account to sign-in.</p>
<p>Models can then be downloaded as an ".stl" file (the format used by 3D printers) or sent directly to a 3D printing service such as <a href="https://www.3dhubs.com/">3D Hubs</a>.</p>
<p><a href="https://www.3dhubs.com/">3D Hubs</a> is an online 3D printing service which facilitates transactions between 3D printer owners (Hubs) and people who want to make 3D prints. Printer owners can join the platform to offer 3D printing services, while customers can locate nearby printer owners to get their 3D models printed.</p>
<p>So what I did myself is design a soap dish. The one we had in our shower was glass which fell and broke. I tried looking for a replacement in several hardware stores but came up empty. So, this was a perfect use case for 3D printing.</p>
<p>This was the end result:</p>
<p><img src="/images/2016/soapdish.png" alt="dish" /></p>
<p>Some learning points for this:</p>
<ul>
<li>While support material can be used to create complex shapes, the result is not as smooth as I would hope for.</li>
<li>The soap dish was a very tight fit in the holder in the shower. This is good because you can make a very precise part, but it also means that accurate measurements (in some cases to less than 1 millimeter) are very important.</li>
</ul>
<p>So, although <a href="https://www.tinkercad.com/">TinkerCAD</a> is quite usable, it is very much an entry level tool. I have since switched to using <a href="http://www.artofillusion.org/">Art of Illusion</a>, which is harder to use but allows for larger, more complex models. It also allows you to create shapes by specifying coordinates and sizes as typed floating point values. This is important because you can get very accurate measurements that way (as opposed to dragging shapes with the mouse, which can't be done that accurately).</p>
OpenShift notes
urn:uuid:2cab8a03-8b2b-9690-35fe-a163ded81f8f
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><strong>THIS IS FOR ARCHIVAL PURPOSES. THIS IS OUT-OF-DATE</strong></p>
<h3 id="backup+OpenShift" name="backup+OpenShift">backup OpenShift</h3>
<pre><code>openshift getenv(USER) from OpenShift php
ssh to {user}@{app-domain} gear snapshot > file</code></pre>
<p>Run gear app</p>
<p>OpenShift migration further notes</p>
<p>Encrypt a file using a supplied password :</p>
<pre><code>$ openssl enc -aes-256-cbc -salt -in file.txt -out file.txt.enc -k PASS</code></pre>
<p>Decrypt a file using a supplied password :</p>
<pre><code>$ openssl enc -aes-256-cbc -d -in file.txt.enc -out file.txt -k PASS</code></pre>
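<p>A quick round-trip sanity check of the two commands above, using a throwaway file and the same placeholder password (newer openssl versions may print a key-derivation deprecation warning on stderr, but the commands still work):</p>

```shell
# Encrypt then decrypt with the same supplied password and compare
tmp=$(mktemp -d)
printf 'secret data\n' > "$tmp/file.txt"

# Encrypt a file using a supplied password
openssl enc -aes-256-cbc -salt -in "$tmp/file.txt" -out "$tmp/file.txt.enc" -k PASS

# Decrypt it again
openssl enc -aes-256-cbc -d -in "$tmp/file.txt.enc" -out "$tmp/file.txt.dec" -k PASS

# The decrypted copy should match the original
cmp -s "$tmp/file.txt" "$tmp/file.txt.dec" && result=ok || result=differ
echo "$result"
```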
<p>Add script to tar plugin, then use rhc to add a SSH key to the account,
ssh to the account and tar-extract the data...</p>
<ol>
<li>get target directory</li>
<li>clean-up target directory</li>
<li>extract new contents</li>
</ol>
<p>Probably can use deploy as an example...</p>
<ul>
<li>need to list images that have been uploaded to S3.</li>
<li>need to convert imgsrc references...</li>
</ul>
<p>? wordpress filters vs hooks ?</p>
<p>Sample stuff:</p>
<ul>
<li><a href="https://codex.wordpress.org/Writing_a_Plugin#Saving_Plugin_Data_to_the_Database">writing a plugin</a></li>
<li><a href="https://www.sitepoint.com/working-with-databases-in-wordpress/">working with database</a></li>
<li><a href="https://premium.wpmudev.org/blog/creating-database-tables-for-plugins/">creating database</a></li>
</ul>
<p>Skeleton</p>
<ul>
<li><a href="https://github.com/convissor/oop-plugin-template-solution">plugin template</a></li>
<li><a href="http://wordpress.stackexchange.com/questions/44708/using-a-plugin-class-inside-a-template">plugin class</a></li>
<li><a href="http://www.yaconiello.com/blog/how-to-write-wordpress-plugin/">how to write wordpress plugin</a></li>
</ul>
<p><a href="http://wordpress.stackexchange.com/questions/35931/how-can-i-edit-post-data-before-it-is-saved">Mangle data when saving</a></p>
<p><a href="https://codex.wordpress.org/Post_Types">post types</a>
... perhaps we add an S3 flag to the post type: Attachment</p>
<h2 id="WordPress" name="WordPress">WordPress</h2>
<p>Standard Customizations:</p>
<ol>
<li>Appearance
<ul>
<li>Set theme</li>
<li>Site Identity</li>
<li>Header & Background image</li>
<li>Menus: Set-up top bar?</li>
</ul></li>
<li>Settings
<ul>
<li>General Settings
<ol>
<li>Membership: Not anyone can register</li>
<li>Timezone UTC</li>
<li>Date/Time Format</li>
</ol></li>
<li>Reading
<ol>
<li>For each article in the feed: Show full text</li>
</ol></li>
<li>Discussion
<ol>
<li>Users must be registered to comment. Not fill out name+email</li>
<li>Comments author must have previously approved comment</li>
</ol></li>
<li>Permalinks
<ol>
<li>Month and name</li>
</ol></li>
</ul></li>
</ol>
<p>Plugins</p>
<ul>
<li>Front Page Category
<ul>
<li>Customizer, Front Page Categories, select what to show</li>
</ul></li>
<li>Collapsing category list
<ul>
<li>Customizer, Widgets, Categories, customize...</li>
</ul></li>
<li>bbPress
<ul>
<li>NO anonymous posting</li>
</ul></li>
<li>WP Social Login
<ul>
<li>Bouncer</li>
<li>Allow Username change</li>
</ul></li>
<li>Rich Reviews
<ul>
<li>Integrate user accounts</li>
</ul></li>
</ul>
<h2 id="OpenShift+Recipe" name="OpenShift+Recipe">OpenShift Recipe</h2>
<p>The official deploy tool <a href="https://github.com/travis-ci/dpl">dpl</a> does not
seem to work with secondary branches.</p>
<h3 id="Pre-requisistes" name="Pre-requisistes">Pre-requisites</h3>
<ol>
<li>Install git</li>
<li>Install RHC command line
<ul>
<li>yum install epel-release</li>
<li>yum install rubygem-rhc</li>
</ul></li>
<li>Install Travis command line
<ul>
<li>yum install epel-release</li>
<li>yum install ruby-devel rubygem-ffi (maybe others)</li>
<li>gem install travis -v 1.8.2 --no-rdoc --no-ri</li>
</ul></li>
<li>A github, travis-ci and openshift account</li>
</ol>
<h3 id="Preparing+Repo" name="Preparing+Repo">Preparing Repo</h3>
<p>This section can be skipped if we already have a github repo.</p>
<ol>
<li>Fork <a href="https://github.com/openshift/wordpress-example.git">wordpress-example</a></li>
<li>Create any additional branches as needed.</li>
<li>Configure travis-ci by creating a basic <code>.travis.yml</code>:
<pre><code>language: php
php:
  - '5.4'
script: true</code></pre></li>
<li>Since <code>travis setup openshift</code> doesn't work, we need to use the DIY
deploy script. So make a copy of it and configure:
<pre><code>env:
  global:
    - OPENSHIFT_USER=$username
    - OPENSHIFT_SECRET=$secret
script:
  - sh deploy.sh
diydeploy:
  - deploy $branch:$openshift_app ... initially empty...</code></pre>
Obviously the secret should be encrypted using:
<pre><code>travis encrypt OPENSHIFT_SECRET=$secret [--add env.global]</code></pre></li>
</ol>
<h3 id="Deploying+Repo+to+OpenShift+App" name="Deploying+Repo+to+OpenShift+App">Deploying Repo to OpenShift App</h3>
<ol>
<li>Create a new Application from the Openshift
<a href="https://www.openshift.com/">console</a>.
<ul>
<li>Use (WordPress 4)</li>
<li>Just leave initial repo to the default</li>
<li>Decide on scaling options.</li>
<li>DO NOT GO THROUGH SITE INSTALL YET!</li>
</ul></li>
<li>Add the $branch:$openshift_app to the <code>.travis.yml</code>, and push so
travis-ci will deploy.</li>
<li>Tweak configuration:
<ul>
<li>force https through .htaccess.</li>
<li>Enable MULTISITE (if needed)</li>
</ul></li>
<li>Enable custom domain
<ul>
<li>Create Domain Name (on DNS) and add custom domain in OpenShift</li>
<li>Add Certificate to OpenShift (self-signed or maybe CloudFlare)
<ul>
<li>openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 3650 -nodes</li>
</ul></li>
<li>Wait for DNS to propagate</li>
</ul></li>
<li>Log-on to the site and go through installation.
<ul>
<li>Verify that URLs use https:</li>
<li>Dashboard -> Settings -> General</li>
<li>Verify in Permalinks that https is used.</li>
</ul></li>
</ol>
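<p>The self-signed certificate command from step 4 can be exercised non-interactively by adding <code>-subj</code> (an addition here, just to skip the interactive prompts), and the result inspected with <code>openssl x509</code> before uploading it:</p>

```shell
# Generate the self-signed cert as in step 4, with -subj so no prompts appear
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
    -days 3650 -nodes -subj "/CN=www.example.com" 2>/dev/null

# Inspect subject and validity dates before adding the cert to OpenShift/CloudFlare
subject=$(openssl x509 -in "$tmp/cert.pem" -noout -subject)
echo "$subject"
openssl x509 -in "$tmp/cert.pem" -noout -dates
```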
<hr />
<p>Fork syncing</p>
<pre><code> - Clone repo
 - Configure a remote fork
   1. git remote -v
   2. git remote add upstream https://github.com/openshift/wordpress-example.git
 - Syncing a fork
   1. git fetch upstream
   2. git checkout master
   3. git merge upstream/master</code></pre>
<p>Multisite domain selection snippet for <code>wp-config.php</code>:</p>
<pre><code>//define('DOMAIN_CURRENT_SITE', 'dev.iliu.net');
if ($_SERVER['SERVER_NAME'] != 'dev.iliu.net') {
    define('DOMAIN_CURRENT_SITE', 'iliu.net');
} else {
    define('DOMAIN_CURRENT_SITE', 'dev.iliu.net');
}</code></pre>
<h2 id="openshift+mailgun" name="openshift+mailgun">openshift mailgun</h2>
<p>Success! You're signed up and we just created your sandbox server sandboxf9dbaaa2f22a49a693955138381837e7.mailgun.org</p>
<h2 id="Include+the+Autoloader+%28see+%26quot%3BLibraries%26quot%3B+for+install+instructions%29" name="Include+the+Autoloader+%28see+%26quot%3BLibraries%26quot%3B+for+install+instructions%29">Include the Autoloader (see "Libraries" for install instructions)</h2>
<pre><code>require 'vendor/autoload.php';
use Mailgun\Mailgun;

# Instantiate the client.
$mgClient = new Mailgun('key-xxxxxxxxxxxxxxxxxxxxxxxxx');
$domain = "sandboxf9dbaaa2f22a49a693955138381837e7.mailgun.org";

# Make the call to the client.
$result = $mgClient->sendMessage("$domain", array(
    'from'    => 'Mailgun Sandbox <postmaster@sandboxf9dbaaa2f22a49a693955138381837e7.mailgun.org>',
    'to'      => 'Alejandro Liu <alejandro_liu@hotmail.com>',
    'subject' => 'Hello Alejandro Liu',
    'text'    => 'Congratulations Alejandro Liu, you just sent an email with Mailgun! You are truly awesome! You can see a record of this email in your logs: https://mailgun.com/cp/log . You can send up to 300 emails/day from this sandbox server. Next, you should add your own domain so you can send 10,000 emails/month for free.'));</code></pre>
<hr />
<ul>
<li><a href="https://blog.openshift.com/free-paas-email-server-with-roundcube/">free paas mail server</a></li>
<li><a href="https://blog.openshift.com/email-in-the-cloud-with-mailgun/">mailgun</a></li>
<li><a href="https://mailgun.com/signup?plan=free">mailgun plan</a></li>
<li><a href="https://www.gregjs.com/linux/2015/forwarding-mail-to-your-gmail-account-with-mailgun/">forwarding with mailgun</a></li>
</ul>
Windows administration from the command line
urn:uuid:327185a6-f93d-43d1-dee6-6ab553c212c8
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Windows system administration is very mouse driven, and to reach
all the tools you need to browse through Windows Explorer.</p>
<p>If you are like me and prefer to log on a limited privilege account and use Runas to perform admin tasks, you can open these consoles with the .msc file names.</p>
<p>Here is a list of admin tools with their .msc file names.</p>
<ul>
<li>domain.msc: AD Domains and Trusts</li>
<li>admgmt.msc: Active Directory Management</li>
<li>dssite.msc: AD Sites and Services</li>
<li>dsa.msc: AD Users and Computers</li>
<li>adsiedit.msc: ADSI Edit</li>
<li>azman.msc: Authorization manager</li>
<li>certsrv.msc: Certification Authority Management</li>
<li>certtmpl.msc: Certificate Templates</li>
<li>cluadmin.exe: Cluster Administrator</li>
<li>compmgmt.msc: Computer Management</li>
<li>comexp.msc: Component Services</li>
<li>cys.exe: Configure Your Server</li>
<li>devmgmt.msc: Device Manager</li>
<li>dhcpmgmt.msc: DHCP Management</li>
<li>dfrg.msc: Disk Defragmenter</li>
<li>diskmgmt.msc: Disk Manager</li>
<li>dfsgui.msc: Distributed File System</li>
<li>dnsmgmt.msc: DNS Management</li>
<li>eventvwr.msc: Event Viewer</li>
<li>ciadv.msc: Indexing Service Management</li>
<li>ipaddrmgmt.msc: IP Address Management</li>
<li>llsmgr.exe: Licensing Manager</li>
<li>certmgr.msc: Local Certificates Management</li>
<li>gpedit.msc: Local Group Policy Editor</li>
<li>secpol.msc: Local Security Settings Manager</li>
<li>lusrmgr.msc: Local Users and Groups Manager</li>
<li>nlbmgr.exe: Network Load balancing</li>
<li>perfmon.msc: Performance Monitor</li>
<li>pkiview.msc: PKI Viewer</li>
<li>pkmgmt.msc: Public Key Management</li>
<li>acssnap.msc: QoS Control Management</li>
<li>tsmmc.msc: Remote Desktops</li>
<li>rsadmin.msc: Remote Storage Administration</li>
<li>ntmsmgr.msc: Removable Storage</li>
<li>ntmsoprq.msc: Removable Storage Operator Requests</li>
<li>rrasmgmt.msc: Routing and Remote Access Manager</li>
<li>rsop.msc: Resultant Set of Policy</li>
<li>schmmgmt.msc: Schema management</li>
<li>services.msc: Services Management</li>
<li>fsmgmt.msc: Shared Folders</li>
<li>sidwalk.msc: SID Security Migration</li>
<li>tapimgmt.msc: Telephony Management</li>
<li>tscc.msc: Terminal Server Configuration</li>
<li>licmgr.exe: Terminal Server Licensing</li>
<li>tsadmin.exe: Terminal Server Manager</li>
<li>uddi.msc: UDDI Services Management</li>
<li>wmimgmt.msc: Windows Management Instrumentation</li>
<li>winsmgmt.msc: WINS Server Manager</li>
</ul>
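<p>For example, to open one of these consoles with admin credentials from a limited account (the domain and user names below are placeholders):</p>

```text
runas /user:MYDOMAIN\adminuser "mmc %windir%\system32\dsa.msc"
```

<p>You will be prompted for the admin account's password before the console opens.</p>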
Deploying Kerberos based SSO
urn:uuid:7daa016a-f53b-0a75-af2a-fbd8a86cc458
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This article goes over how to implement Single-Sign-On
on Linux. It covers the integration between the Kerberos
service and applications such as FireFox.</p>
<h3 id="Pre-requisites" name="Pre-requisites">Pre-requisites</h3>
<ul>
<li>Kerberos Domain Controller (KDC)</li>
<li>User accounts in the KDC</li>
<li>KDC based logins</li>
</ul>
<p>To make sure that this is working, login to your workstation using your kerberos password and use the command:</p>
<pre><code>klist</code></pre>
<p>This should show the principals assigned to you:</p>
<pre><code>Ticket cache: FILE:/tmp/krb5cc_XXXX_ErVb5X
Default principal: zzzz@LOCALNET
Valid starting Expires Service principal
01/11/2016 15:51:35 01/12/2016 15:51:34 krbtgt/LOCALNET@LOCALNET</code></pre>
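<p>If <code>klist</code> reports no credentials, you can obtain a ticket manually (the principal name follows the example output above):</p>

```text
kinit zzzz@LOCALNET
klist
```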
<h3 id="Configuring+Apache" name="Configuring+Apache">Configuring Apache</h3>
<ol>
<li>Install any necessary modules on the server:
<ul>
<li><code>yum install mod_auth_kerb</code></li>
</ul></li>
<li>Create a service principal for the web server (this needs to be done on the KDC):
<ul>
<li><code>kadmin.local -q "addprinc -randkey HTTP/www.example.com"</code></li>
</ul></li>
<li>Export the encryption keys to a keytab:
<ul>
<li><code>kadmin.local -q "ktadd -k /tmp/http.keytab HTTP/www.example.com"</code></li>
</ul></li>
<li>Copy <code>/tmp/http.keytab</code> to the webserver at <code>/etc/httpd/http.keytab</code>.</li>
<li>Set ownership and permissions:
<ul>
<li><code>chmod 600 /etc/httpd/http.keytab</code></li>
<li><code>chown apache /etc/httpd/http.keytab</code></li>
</ul></li>
<li>Enable authentication by adding this configuration:
<ul>
<li><code>AuthType Kerberos</code></li>
<li><code>AuthName "Acme Corporation"</code></li>
<li><code>KrbMethodNegotiate on</code></li>
<li><code>KrbMethodK5Passwd off</code></li>
<li><code>Krb5Keytab /etc/httpd/http.keytab</code></li>
<li><code>require valid-user</code></li>
</ul></li>
<li>Re-start apache</li>
</ol>
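<p>The directives from step 6 go inside the block that protects the content, for example a <code>Location</code> section (the path below is just an example):</p>

```text
<Location /protected>
    AuthType Kerberos
    AuthName "Acme Corporation"
    KrbMethodNegotiate on
    KrbMethodK5Passwd off
    Krb5Keytab /etc/httpd/http.keytab
    require valid-user
</Location>
```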
<h3 id="Configure+FireFox" name="Configure+FireFox">Configure FireFox</h3>
<ol>
<li>Navigate to <code>about:config</code></li>
<li>Search for: <code>negotiate-auth</code></li>
<li>Double click on <code>network.negotiate-auth.trusted-uris</code>.</li>
<li>Enter hostnames, URL prefixes, etc., separated by commas. Examples:
<ul>
<li>www.example.com</li>
<li><a href="http://www.example.com/">http://www.example.com/</a></li>
<li>.example.com</li>
</ul></li>
</ol>
<p>It is possible to configure this setting for all users by creating a global config file:</p>
<ol>
<li>Find configuration directory:
<ul>
<li><code>rpm -q firefox -l | grep preferences</code></li>
</ul></li>
<li>Create a javascript file in that directory (by convention, <code>autoconfig.js</code>; other file names will work, but for best results it should be early in the alphabet).</li>
<li>Add the following line:
<ul>
<li><code>pref("network.negotiate-auth.trusted-uris",".example.com");</code></li>
</ul></li>
</ol>
<h3 id="Configure+OpenSSH+server" name="Configure+OpenSSH+server">Configure OpenSSH server</h3>
<ol>
<li>Create a service principal for the host (this needs to be done on the KDC):
<ul>
<li><code>kadmin.local -q "addprinc -randkey host/shell.example.com"</code></li>
</ul></li>
<li>Export the encryption keys to a keytab:
<ul>
<li><code>kadmin.local -q "ktadd -k /tmp/krb5.keytab host/shell.example.com"</code></li>
</ul></li>
<li>Copy <code>/tmp/krb5.keytab</code> to the host at: <code>/etc/krb5.keytab</code>.</li>
<li>Set ownership and permissions:
<ul>
<li><code>chmod 600 /etc/krb5.keytab</code></li>
<li><code>chown root /etc/krb5.keytab</code></li>
</ul></li>
<li>Enable authentication, change these settings in <code>/etc/ssh/sshd_config</code>:
<ul>
<li><code>KerberosAuthentication yes</code></li>
<li><code>GSSAPIAuthentication yes</code></li>
<li><code>GSSAPICleanupCredentials yes</code></li>
<li><code>UsePAM no</code> <em># Not supported on RHEL7, where it should be left as <code>yes</code></em></li>
</ul></li>
<li>Restart <code>sshd</code>.</li>
</ol>
<h3 id="Configure+OpenSSH+clients" name="Configure+OpenSSH+clients">Configure OpenSSH clients</h3>
<p>Configure <code>/etc/ssh/ssh_config</code> or <code>~/.ssh/config</code>:</p>
<pre><code>Host *.localnet
GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes</code></pre>
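<p>To verify the whole chain, obtain a ticket and connect; you should get a shell without a password prompt, and <code>klist</code> should then also show a <code>host/</code> service ticket (host and realm names below follow the examples above):</p>

```text
kinit zzzz@LOCALNET
ssh -v shell.example.com
klist
```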
clipping ideas
urn:uuid:2f1cfc69-72cf-5bf5-8b70-ffccb95cb860
2024-03-05T00:00:00+01:00
Alejandro Liu
<ul>
<li>Divide into
<ul>
<li>Work : Only visible to company and clients</li>
<li>Personal: Public/Private areas</li>
</ul></li>
</ul>
<p>Features:</p>
<ul>
<li>Send e-mail to address => creates entry</li>
<li>handle attachments</li>
<li>Rich text support?</li>
<li>Markdown through short codes (maybe)</li>
<li>Searchable</li>
<li>Auto Tag/Auto Categorize</li>
<li>Can create entries through UI</li>
</ul>
<p>Options:</p>
<ul>
<li>MHonArc</li>
<li>WordPress + WebMail posting</li>
</ul>
Let's Encrypt
urn:uuid:d55115a7-d987-2f5a-bcd7-b65b631ab2c5
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a service that lets you get SSL certificates for HTTPS. These certificates are trusted by major browsers. See <a href="https://letsencrypt.org/about/">Let's Encrypt</a>. This is a barebones <em>howto</em> to get SSL certificates:</p>
<pre><code>git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt</code></pre>
<p>This contains the client software for Let's Encrypt.</p>
<pre><code>./letsencrypt-auto certonly --manual</code></pre>
<p>This will start by updating and getting any needed dependencies and then jump to a <em>wizard</em>-like configuration to get this done. Follow the prompts and pay special attention to the prompt used to validate your domain. (You need to create a couple of folders and a file with the right content.) Afterwards your certificates will be in:</p>
<pre><code>/etc/letsencrypt/live/mydomain.tld</code></pre>
<p>Then go to your CPanel configuration, then upload:</p>
<ul>
<li><code>privkey.pem</code> to <strong>Private Keys</strong></li>
<li><code>cert.pem</code> to <strong>Certificates</strong></li>
</ul>
<p>Then you go to <strong>Manage SSL Hosts -> Browse Certificates</strong>, pick the right certificate. Then paste <code>chain.pem</code> (from /etc/letsencrypt/live/mydomain.tld) to the CA Bundle box.</p>
undup
urn:uuid:68c3298f-b686-bb03-95b0-d39fd0965547
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So, after a long while, I wrote a new C language program. As usual,
the same things that I dislike about C programming popped up,
specifically the need for low level data structures and manual
memory management.</p>
<p>I did learn some new things:</p>
<ul>
<li><a href="https://github.com/troydhanson/uthash/">uthash</a> : I have used this library before, but there were a few new features that I did not know before, specifically it not only includes the hash library, but also some other <em>high level</em> structures that were quite handy.</li>
<li>Unit testing : So I started using <a href="https://github.com/danfis/cu/">cu</a>, a C unit testing library. This is the first time I have written a program with integrated unit testing. I can see its usefulness, but it does feel like a lot of work. For a casual programmer like myself, it feels like overkill.</li>
<li>Continuous integration with <a href="http://travis-ci.org/alejandroliu/undup">Travis-CI</a> : For this project I tried using a CI tool. I chose <a href="http://travis-ci.org/">Travis-CI</a> because it integrates with <a href="http://github.com/">GitHub</a>. This only makes sense with unit testing. Once again, for a casual programmer like myself, it feels like a bit too much, but I can see how it would be useful if you have multiple contributors to the same project repository.</li>
<li>Creating binaries for a Zyxel NSA 325 v2 : So I got the NSA 325v2 SDK, and I am cross compiling for it. Quite straightforward, but still, something new.</li>
<li>An interesting feature of this code is that, when possible, it is object oriented.</li>
</ul>
<p>Anyway, this project can be found in github:</p>
<ul>
<li><a href="https://github.com/alejandroliu/undup">Undup github repository</a></li>
</ul>
Markdown Javascript editors
urn:uuid:0fc8bbb7-761c-8ded-a624-976cb565c313
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="http://vuejs.org/">VUE JS</a>: Includes a Markdown editor example that allows edit with online preview next to it</p>
<p><a href="http://epiceditor.com/">Embeddable JS Markdown editor</a> : Has a button to
preview</p>
<p>Editors that edit in preview-like mode</p>
<ul>
<li><a href="https://github.com/lepture/editor">editor</a></li>
<li><a href="https://github.com/NextStepWebs/simplemde-markdown-editor/">simplemde</a></li>
<li><a href="https://github.com/jbt/markdown-editor">markdown</a> (With GFM)</li>
</ul>
Picade Todo
urn:uuid:72deb563-5ecf-2d13-a5a0-2277e506c8bc
2024-03-05T00:00:00+01:00
Alejandro Liu
<ol>
<li><a href="http://forums.pimoroni.com/t/picade-pcb-emulator-key-mapping/922">key mappings</a>
<ul>
<li>look up and label default mappings</li>
</ul></li>
</ol>
<pre><code> { KEY_UP_ARROW, UP },
{ KEY_DOWN_ARROW, DOWN },
{ KEY_LEFT_ARROW, LEFT },
{ KEY_RIGHT_ARROW, RIGHT },
{ KEY_LEFT_CTRL, BTN_1 },
{ KEY_LEFT_ALT, BTN_2 },
{ ' ', BTN_3 },
{ KEY_LEFT_SHIFT, BTN_4 },
{ 'z', BTN_5 },
{ 'x', BTN_6 },
{ 's', START },
{ 'c', COIN },
{ KEY_RETURN, ENTER },
{ KEY_ESC, ESCAPE },
/* Change these lines to set key bindings for VOL_UP and VOL_DN */
{ 'u', VOL_UP },
{ 'd', VOL_DN },</code></pre>
<ol>
<li>Properly secure pi to case</li>
<li>Properly configure MAME</li>
<li>SSH to picade</li>
<li>Add roms to picade</li>
<li>Add a external port to plugin controllers</li>
<li>scraping games - How?</li>
<li>netplay</li>
<li>setup battery power</li>
<li>Change the art work</li>
<li>Properly install power button</li>
<li><a href="http://picraftbukkit.webs.com/pi-minecraft-server-how-to">minecraft server</a></li>
<li>Install Java and how to run minecraft on PC</li>
<li>Add a HD for PS1 games <a href="https://shop.pimoroni.com/products/sata-hard-drive-to-usb-adapter">sata adapter</a></li>
</ol>
<h2 id="Wiring..." name="Wiring...">Wiring...</h2>
<pre><code>GPIO --- 220Ohm --- +LED- ---> GND</code></pre>
<p>Python</p>
<pre><code>import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)    # use Broadcom pin numbering
GPIO.setup(25, GPIO.OUT)  # pin 25 drives the LED
GPIO.output(25, 1)        # turn the LED on
GPIO.setup(22, GPIO.IN)   # pin 22 reads the button
GPIO.input(22)            # returns True or False
GPIO.cleanup()            # release the pins when done</code></pre>
<p>ASCIART</p>
<pre><code> +--- 10 kOhm --- GND
|
|
| _-v
GPIO -- 1 kOhm --+---+ +------ 3.3V</code></pre>
<p><a href="https://www.arduino.cc/en/Tutorial/Button">button</a></p>
<pre><code>GND -- 10K Ohm --+---+ SW +--- 5V
|
GPIO----------------+</code></pre>
<p><a href="https://www.arduino.cc/en/Tutorial/JoyStick">joystic</a></p>
Centos7/RHEL7 FirewallD -- the least you need to know
urn:uuid:2f2ae9d7-7322-2985-8669-df16986d5e9a
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This post is just a simple hints-tips to get something going with FirewallD without going into too much detail.</p>
<ol>
<li>Checking if you are using <strong>firewalld</strong>:
<ul>
<li>firewall-cmd --state</li>
</ul></li>
<li>Check your zones (needed later when opening ports):
<ul>
<li>firewall-cmd --get-default-zone</li>
<li>firewall-cmd --get-active-zones</li>
</ul></li>
<li>Checking what is active:
<ul>
<li>firewall-cmd --zone=public --list-all</li>
</ul></li>
<li>Opening services:
<ul>
<li>firewall-cmd --zone=public --add-service=http</li>
<li>Or, to make it permanent: firewall-cmd --permanent --zone=public --add-service=http</li>
<li>firewall-cmd --reload</li>
<li>Services are defined in /usr/lib/firewalld/services and /etc/firewalld/services.</li>
</ul></li>
<li>Opening ports:
<ul>
<li>firewall-cmd --permanent --zone=public --add-port=443/tcp</li>
<li>firewall-cmd --reload</li>
</ul></li>
</ol>
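<p>Putting it together, permanently opening a custom port and checking the result looks like this (the port number is just an example):</p>

```text
firewall-cmd --permanent --zone=public --add-port=8080/tcp
firewall-cmd --reload
firewall-cmd --zone=public --query-port=8080/tcp
```

<p>The last command prints <code>yes</code> once the port is open.</p>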
Raspberry Pi Thin Client
urn:uuid:16f1fce2-3a88-883e-2985-7511202dc265
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>The Thin Client project wants to create a very low-price thin client based on the Raspberry Pi board! It supports Microsoft RDC, Citrix ICA, VMWare View, OpenNX and SPICE.</p>
<p><a href="http://rpitc.blogspot.nl/">RPITC</a></p>
Raspberry pi notes
urn:uuid:29999dfc-abbe-f629-d1dd-b0c9232c63bf
2024-03-05T00:00:00+01:00
Alejandro Liu
<h2 id="raspberry+pi+shops" name="raspberry+pi+shops">raspberry pi shops</h2>
<ul>
<li>NL based
<ul>
<li><a href="http://www.sossolutions.nl">sos solutions</a></li>
<li><a href="http://www.hackerstore.nl/">hackerstore</a></li>
<li><a href="https://www.antratek.nl/">antratek</a></li>
<li><a href="https://www.kiwi-electronics.nl/">kiwi-electonics</a></li>
</ul></li>
<li>UK based
<ul>
<li><a href="http://thepihut.com/">the pi hut</a></li>
<li><a href="http://www.modmypi.com/">modmypi</a></li>
<li><a href="https://shop.pimoroni.com/">pimorni</a></li>
</ul></li>
<li>International
<ul>
<li><a href="http://mouser.com">mouser</a></li>
<li><a href="http://conrad.nl">conrad</a></li>
</ul></li>
</ul>
<p>Hardware to attach/secure Raspberry Pi boards</p>
<ul>
<li>4 M2 16mm bolts, + nuts, + spacers?
<ul>
<li><a href="https://www.conrad.nl/nl/zeskantmoeren-m25-din-934-kunststof-10-stuks-toolcraft-830405-830405.html">moeren</a></li>
<li><a href="https://www.conrad.nl/nl/toolcraft-zeskantbouten-m25-16-mm-buitenzeskant-inbus-din-933-kunststof-10-stuks-830220.html">bouten</a></li>
<li><a href="https://www.conrad.nl/nl/modelcraft-bec-verlengkabel-208429.html">verleng kabel</a></li>
<li><a href="https://www.conrad.nl/nl/usb-20-verlengkabel-1x-usb-20-stekker-intern-8-polig-1x-usb-20-bus-intern-8-polig-030-m-grijs-vergulde-steekcontacten-ul-gecertificeerd-971778.html">USB Connectors</a></li>
</ul></li>
</ul>
<p>Creating a Read-Only root for Raspbian:</p>
<ul>
<li><a href="https://hallard.me/raspberry-pi-read-only/">read only</a></li>
<li><a href="http://blog.pi3g.com/2014/04/make-raspbian-system-read-only/">ro raspbian</a></li>
<li><a href="http://blog.gegg.us/2014/03/a-raspbian-read-only-root-fs-howto/">ro rootfs</a></li>
</ul>
<p>Hack/cycle something: <a href="https://learn.adafruit.com/raspberry-gear/introduction">raspberry gear</a></p>
<h2 id="3D+Printing+Cases%3A" name="3D+Printing+Cases%3A">3D Printing Cases:</h2>
<ul>
<li><a href="http://raspberrypi.stackexchange.com/questions/9934/is-there-an-accurate-3d-cad-model-of-the-version-b-board">b-board mode</a></li>
<li><a href="https://i.materialise.com/blog/how-to-design-a-raspberry-pi-case-for-3d-printing">howto</a></li>
<li><a href="https://www.3dhubs.com/">3dhubs</a></li>
</ul>
Replacing Emacs with Atom
urn:uuid:8e0f1c45-0037-a851-b688-462125ecaf1c
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><img src="/images/2015/atom.png" alt="atom" /></p>
<p>As an old UNIX guy I have been using
<a href="https://www.gnu.org/software/emacs/emacs.html">emacs</a> for years.
So in a way, I am very comfortable with using it and most of keyboard
shortcuts. But, it really is an old animal and I have been thinking
that I should be moving to a more modern replacement to it for quite
a while.</p>
<p>My latest attempt (and the most serious attempt to date) has been
trying <a href="http://atom.io/">atom</a>. <a href="http://atom.io/">Atom</a> is a new
editor from the makers of <a href="https://github.com/">github</a> which claims
to have been inspired by <a href="https://www.gnu.org/software/emacs/emacs.html">emacs</a>
and also supports the latest web technologies.</p>
<p>After using it for some time, I found the following conclusions:</p>
<ol>
<li>I am really used to using CTRL + key to move around, and switching to the dedicated arrow and home/end keys feels like a step backwards. I might be old fashioned, but it really is about keeping your fingers on the keyboard "home" row. Also, while the arrow keys are easy to find, I have trouble with the home/end keys (which apparently I use a lot when programming), especially because I switch between a laptop and a full-size keyboard all the time.</li>
<li>I really like the automatic programming-style formatting in Emacs.</li>
<li>I miss the record macro/execute macro facility of Emacs.</li>
<li>The automatic "(" inserts ")" really annoys me.</li>
<li>Browsing for extensions in "Atom" seems a bit non-intuitive to me.</li>
<li>I am used to using the command line and opening files in a running editor directly from there. I am able to configure emacs to do this, but it is not clear to me how to do this with Atom yet.</li>
<li>I don't know why I am so used to the <a href="https://www.gnu.org/software/emacs/emacs.html">Emacs</a> CTRL+S (search) functionality.</li>
</ol>
<p>The following things I really like:</p>
<ol>
<li>Syntax highlighting is quite solid</li>
<li>The project view pane is very useful.</li>
<li>The Markdown preview pane.</li>
</ol>
<p>So up to now it looks promising, but I am not convinced. I am still using <a href="https://www.gnu.org/software/emacs/emacs.html">emacs</a>, especially because it sometimes feels that <a href="http://atom.io/">atom</a> is slow to start.</p>
Online IDEs
urn:uuid:3f5b7394-7bfc-4a1a-ad85-39753085aed8
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>If you want to move to the cloud and like to code like me, this is
kind of a basic necessity.</p>
<p>This applies in particular to Chromebook users.</p>
<p><a href="http://www.chromebookhq.com/five-best-online-ides-making-the-switch-to-a-chromebook/">5 Best online IDEs</a></p>
Lifehacker App Guides
urn:uuid:00c9964c-77c3-091b-a1f1-4e9eb1e4dc43
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>These two hyperlinks from Lifehacker are quite useful:</p>
<ul>
<li><a href="http://lifehacker.com/5825402/the-lifehacker-app-directory-iphone" title="Lifehacker App directory for iPhone">iPhone App Guide</a></li>
<li><a href="http://lifehacker.com/5825401/the-lifehacker-app-directory-android" title="Lifehacker App directory for Android">Android App Guide</a></li>
</ul>
Upload to OpenWRT
urn:uuid:471f09b8-f4b8-9870-b4df-231b98c8fa30
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Base 64 decoding: <code>coreutils-base64</code></p>
<pre><code>#!/usr/local/bin/haserl --upload-limit=4096 --upload-dir=/tmp
content-type: text/html
<html><body>
<form action="<% echo -n $SCRIPT_NAME %>" method=POST enctype="multipart/form-data" >
<input type=file name=uploadfile>
<input type=submit value=GO>
<br>
<% if test -n "$HASERL_uploadfile_path"; then %>
<p>
You uploaded a file named <b><% echo -n $FORM_uploadfile_name %></b>, and it was
temporarily stored on the server as <i><% echo $HASERL_uploadfile_path %></i>. The
file was <% cat $HASERL_uploadfile_path | wc -c %> bytes long.</p>
<% rm -f $HASERL_uploadfile_path %><p>Don't worry, the file has just been deleted
from the web server.</p>
<% else %>
You haven't uploaded a file yet.
<% fi %>
</form>
</body></html></code></pre>
<p><a href="http://haserl.sourceforge.net/manpage.html">haserl man page</a></p>
<p>Uploader tool: <a href="https://curl.haxx.se/docs/httpscripting.html#File_Upload_POST">post</a></p>
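<p>From the client side, the form above can be driven with <code>curl</code> (the router address and file name are examples):</p>

```text
curl -F "uploadfile=@firmware.bin" http://192.168.1.1/cgi-bin/upload.cgi
```

<p><code>-F</code> sends a <code>multipart/form-data</code> POST, matching the form's enctype; the field name must match the form's <code>uploadfile</code>.</p>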
<p>Disable/Relocate cgi-bin</p>
organizing notes
urn:uuid:b38446b4-93e4-66a0-3441-bbbb91fb832e
2024-03-05T00:00:00+01:00
Alejandro Liu
<h2 id="My+Documents" name="My+Documents">My Documents</h2>
<p>DOCUMENTS</p>
<ul>
<li>Project Folder
<ul>
<li>old</li>
<li>YYYY</li>
<li>deliverables</li>
<li>clips?</li>
</ul></li>
<li>category folder
<ul>
<li>expenses? expense reports and digital receipts</li>
<li>regs - passwords, registrations, etc...</li>
<li>nice notes - thank you letters, etc.</li>
</ul></li>
<li>Personal folder
<ul>
<li>info or important - health account data, friends' contacts, etc.</li>
<li>clips</li>
<li>writing - personal writing, notes, letter, drafts,</li>
<li>taxes - a folder per year</li>
</ul></li>
<li>logs
<ul>
<li>activity log</li>
<li>travel log</li>
</ul></li>
</ul>
<p>Naming convention:</p>
<pre><code>&lt;initials&gt;-&lt;month&gt;&lt;day&gt;-&lt;type&gt;.&lt;ext&gt;</code></pre>
<ul>
<li>initials : author initials or source initials</li>
<li>monthday : 2 digits each</li>
<li>type : type of document</li>
</ul>
<h2 id="TODO+Lists" name="TODO+Lists">TODO Lists</h2>
<ul>
<li>Work
<ul>
<li>Outlook based</li>
<li>Tickler File</li>
<li>Must Do List</li>
<li>E-mail to TODO</li>
</ul></li>
<li>Personal
<ul>
<li>Google Tasks</li>
</ul></li>
</ul>
<h2 id="Weekly+Review+Steps" name="Weekly+Review+Steps">Weekly Review Steps</h2>
<p>NOTE: Clean-up temp folders</p>
<ul>
<li>Collect loose paper notes and materials (business cards, receipts, etc. - put in the in-basket for processing)</li>
<li>Get IN to zero</li>
<li>Empty your head (write down any new projects, action items, etc.)</li>
<li>Review Action lists (mark off completed actions and review for reminders of further action steps to capture)</li>
<li>Review Previous Calendar Data (review for remaining action items, reference information, etc.)</li>
<li>Review Upcoming Calendar Data</li>
<li>Review Waiting For list (record appropriate actions for any needed follow-up and check off received items)</li>
<li>Review Project and Larger Outcome lists (ensure that at least one kick-start action is in your system for each)</li>
<li>Review any relevant checklists</li>
<li>Review Someday/Maybe list (check for any projects that may have become active and transfer them to "Projects"; delete items no longer of interest)</li>
<li>Review "Pending" and Support Files (browse through all work-in-progress support material to trigger new actions, completions, and waiting-fors)</li>
</ul>
<h3 id="Six+Level+Model" name="Six+Level+Model">Six Level Model for Reviewing Your Own Work</h3>
<ol>
<li>current actions</li>
<li>current projects</li>
<li>areas of responsibility</li>
<li>1-2 year goals</li>
<li>3-5 year vision</li>
<li>big picture view</li>
</ol>
<hr />
<ul>
<li>projects: clearly defined outcomes and the next actions to move them towards closure</li>
<li>horizontal focus: reminders placed in a trusted system that is reviewed regularly</li>
<li>vertical focus: informal back-of-the-envelope planning</li>
</ul>
<h2 id="Task+Lists" name="Task+Lists">Task Lists</h2>
<ul>
<li>@ANYWHERE : Actions that can be done anywhere (rare?)</li>
<li>@CALLS : Phone calls</li>
<li>@ERRANDS : Actions that I can do while going about</li>
<li>@HOME : Actions that can only be done at home</li>
<li>@HOME_PC : Actions that can be done at a PC (home PC?) about home</li>
<li>@REVIEW : Items for review. Should be only text. Attachments should go to my Dropbox folder</li>
<li>@WAITING_FOR : Tracker items that need to be followed up later</li>
<li>@WORK : Actions that can only be done at the office</li>
<li>@WORK_PC : Actions that can be done at a PC (work PC) about work</li>
<li>@AGENDAS : Notes on what to discuss with different people</li>
<li>SOMEDAY_MAYBE : Idea parking lot</li>
</ul>
Another Markdown Editor
urn:uuid:258fb9c9-f8fc-2221-fd0a-268688a20d58
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This one is GitHub Flavored Markdown...</p>
<p><a href="http://jbt.github.io/markdown-editor/">markdown editor</a></p>
Web Links
urn:uuid:b29ed475-b567-083d-892d-5b8bc527ab1d
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Here a few web-links to interesting web apps.</p>
<p>It covers stuff about password security and checking if web sites
are down, etc etc.</p>
<p><img src="/images/2015/ifysfxqtv2dyygl0b09k.jpg" alt="ifysfxqtv2dyygl0b09k" /></p>
<p><a href="http://www.downforeveryoneorjustme.com/">Down For Everyone or Just Me</a>:</p>
<p>If you're getting an error when visiting a certain site, it could be down or something could be wrong on your end. To see which it is, head there and type in the web site's domain. It'll let you know if it's actually down or whether you need to do a little more troubleshooting.</p>
<p>If you're curious how fast your internet is for any reason, this is the
site to check. It'll give you both upload and download speeds, so you
can find out if you're getting what you pay for (or if you're just
getting faster speeds than your friends). Just load it up and
click "Begin Test" to get started.</p>
<p><img src="/images/2015/jcnkq3n1jdkg3mtindvt.jpg" alt="jcnkq3n1jdkg3mtindvt" /></p>
<p><a href="http://howsecureismypassword.net/">How Secure Is My Password?</a>:</p>
<p>Does what it says on the tin. Type in a password and it'll tell you how long it would take to crack.</p>
<p><a href="http://whatismyip.org/">What's My IP</a>:</p>
<p>Whether you're setting up a home media server with Subsonic or you just need to SSH into a computer at home, sometimes
you need to know a computer's IP address from outside of your network, and this site will tell you what it is.</p>
<p><a href="http://canyouseeme.org/">Can You See Me</a>: If you're having connection issues with a certain program, like email, IM, or
BitTorrent, it could be because your firewall or ISP is blocking a certain port that program needs. Canyouseeme.org will
let you type in a port and check if it's open- if it isn't, then that
could be the source of your trouble. If it's open, then you know it's something else.</p>
Fiddle Markdown Tool
urn:uuid:ceeb7cf2-ab74-8ed3-85dd-1c67cb077de7
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>For a quick and simple Markdown Preview:</p>
<p><img src="/images/2015/oudpno5sb9dfgfkvpvgw.png" alt="oudpno5sb9dfgfkvpvgw" /></p>
<p><a href="https://fiddle.md/">Fiddle</a></p>
Code Kingdoms
urn:uuid:8916bb8f-f295-f2b8-29f9-b4ad28cda9dc
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Code Kingdoms is targeted towards six- to 13-year olds and looks very
much like your everyday puzzle adventure game. Choose an animal, walk
around a kingdom saving animals through puzzles. The difference is
most of the puzzles require kids to use code elements to solve the
puzzles. At first this is through dragging-and-dropping code snippets,
but as they progress, kids will be typing in code themselves.</p>
<p><img src="/images/2015/pugw1qoceliykwmnprbt.png" alt="pugw1qoceliykwmnprbt" /></p>
<p>Besides teaching actual JavaScript through play, Code Kingdoms
also helps kids develop problem-solving skills and the encouragement
to keep pushing on when they're faced with a challenge in the
game, much like programmers often have to push through challenging
walls.</p>
<p><a href="http://codekingdoms.com/">Code Kingdoms</a></p>
Kerberos Client
urn:uuid:e0e789e0-3b60-da9b-d283-a24178312c12
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This simple mini how-to goes over the configuration of a
linux system so it can use a Kerberos Realm server
for authentication.</p>
<ol>
<li>
<p>Make sure you have the pam_krb5 rpm files installed. You can check this by running the <code>rpm -qa | grep pam</code> command and seeing whether the pam_krb5 rpm files are listed. If they aren't, you can typically download them in an update of the Linux or Unix operating system that you are running.</p>
</li>
<li>
<p>Add the line to the "/etc/pam.d/system-auth" part of the auth section of Kerberos. Add it after the "pam_unix.so" line:</p>
<pre><code>auth sufficient /lib/security/pam_krb5.so use_first_pass forwardable</code></pre>
</li>
<li>
<p>Add the line to the "/etc/pam.d/system-auth" part of the password section of Kerberos. Add it after the "pam_unix.so" line:</p>
<pre><code>password sufficient /lib/security/pam_krb5.so use_authtok</code></pre>
</li>
<li>
<p>Add the line to the "/etc/pam.d/system-auth" part of the session section of Kerberos. Add it after the "pam_unix.so" line:</p>
<pre><code>session optional /lib/security/pam_krb5.so</code></pre>
</li>
</ol>
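<p>After these edits, the relevant sections of <code>/etc/pam.d/system-auth</code> look roughly like this (the <code>pam_unix.so</code> options vary per distribution, so this is only a sketch):</p>

```text
auth        sufficient    pam_unix.so
auth        sufficient    /lib/security/pam_krb5.so use_first_pass forwardable

password    sufficient    pam_unix.so
password    sufficient    /lib/security/pam_krb5.so use_authtok

session     required      pam_unix.so
session     optional      /lib/security/pam_krb5.so
```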
HP Envy 4504 Set-up
urn:uuid:0507c985-d8d8-dcea-52f6-d3194a9ebce0
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>I bought a HP Envy 4504. Overall I am happy with it. This is how
I configure it so I can use it with Linux.</p>
<p>This mini howto applies to ArchLinux, void linux and Centos/RedHat distributions.</p>
<h3 id="Installation" name="Installation">Installation</h3>
<p>Archlinux:</p>
<pre><code>cups, hplip, python2, sane</code></pre>
<p>Centos:</p>
<pre><code>cups, hplip, hplip-gui, sane</code></pre>
<p>Some optional dependencies may be needed.</p>
<p>void linux:</p>
<pre><code>hplip-gui</code></pre>
<p>And for scanning, install:</p>
<pre><code>simple-scan and/or xsane</code></pre>
<h3 id="Configuration+Arch+Linux+and+Centos%2FRedHat" name="Configuration+Arch+Linux+and+Centos%2FRedHat">Configuration Arch Linux and Centos/RedHat</h3>
<pre><code>sudo systemctl enable cups
sudo systemctl start cups
sudo hp-setup</code></pre>
<p>Run <code>hp-setup -i</code>, select "Network", then "Advanced Options -> Manual Discovery". Printer: npr1. PPD File:</p>
<pre><code>/usr/share/ppd/HP/hp-envy_4500_series.ppd</code></pre>
<p>Uncomment <code>hpaio</code> in <code>/etc/sane/dll.conf</code>.</p>
<h3 id="void+linux+configuration" name="void+linux+configuration">void linux configuration</h3>
<p>These are void linux specific settings:</p>
<p>enable cups:</p>
<pre><code>ln -s /etc/sv/cupsd /var/service</code></pre>
<p>Add printer (run with <code>sudo</code>):</p>
<pre><code>print_host=npr1
hp-setup $print_host</code></pre>
<h3 id="Tweaks" name="Tweaks">Tweaks</h3>
<p>Some commands:</p>
<pre><code>lpstat -p
cupsenable printer</code></pre>
<p>Also, since it is a WIFI printer, normally it will go into sleep/power
save mode. This means if you then try to print from cups it will fail
(printer is asleep). Subsequent prints should work, but now cupsd has
flagged the printer as paused. To prevent this you should run this
command as root:</p>
<pre><code>lpadmin -p ENVY_4500 -o printer-error-policy=retry-job</code></pre>
<p>More configuration commands:</p>
<ul>
<li>Set default paper size:
<ul>
<li><code>echo a4 > /etc/papersize</code></li>
</ul></li>
</ul>
<hr />
<h2 id="Updates" name="Updates">Updates</h2>
<ul>
<li>2022-11-06: Removed from voidlinux:
<ul>
<li>uncompress the PPD file (otherwise it is not recognized)
so that it runs</li>
<li><strong>removed OBSOLETE patch</strong></li>
<li>For scanning, uncomment <code>hpaio</code> in <code>/etc/sane/dll.conf</code>.</li>
</ul></li>
<li>2020-03-09 : Removed:
<ul>
<li>To prevent this you should configure
the default <code>ErrorPolicy</code> in <code>/etc/cups/cupsd.conf</code> by adding in the top
scope: "ErrorPolicy retry-job"</li>
<li>References: <a href="https://superuser.com/questions/280396/how-to-resume-cups-printer-from-command-line">superuser.com</a></li>
</ul></li>
<li>2019-02-19 : Added <a href="http://voidlinux.org">void linux</a> instructions.</li>
</ul>
RPMGOT
urn:uuid:5a637554-5e2f-cdc0-0a63-d45fa77d16d2
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="http://github.com/alejandroliu/rpmgot">Software package download proxy</a></p>
<p><code>rpmgot</code> is a simple/lightweight software package download proxy. It was designed to run on an OpenWRT router with some USB storage. So it is fully implemented as an <code>ash</code> script.</p>
<p>The basic idea has been implemented multiple times. For example refer to this <a href="http://ma.ttwagner.com/lazy-distro-mirrors-with-squid/">article</a> on a <a href="http://www.squid-cache.org/">squid</a> based implementation.</p>
<p>Unlike squid, which once you include all its dependencies can use up over 1MB of space just to install it, this software has very few dependencies.</p>
<p>The idea is that small developers running the same operating system version(s) would benefit from a local mirror, but they don't have so many systems that it's actually reasonable for them to run a full mirror, which would entail rsyncing a bunch of content daily, much of which may be packages that would never be used.</p>
<p><code>rpmgot</code> implements a <em>lazy</em> mirror: something that appears to its client systems as a full mirror, but acts more like a proxy. When a client installs a particular version of a particular package for the first time, it fetches it from a "real" mirror and caches it for a long time. Subsequent requests for the same package from the "mirror" are served from cache.</p>
<p>The RPM files are cached for a very long time. Normally it is an awful, awful idea for proxy servers to interfere with the <code>Cache-Control / Expires</code> headers that sites serve. But in the case of a mirror, we know that any update to a package will necessarily bump the version number in the URL. Ergo, we can pretty safely cache RPMs indefinitely.</p>
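<p>The cache-or-fetch logic boils down to a few lines of shell. This is only an illustrative sketch, not <code>rpmgot</code>'s actual code: the function name, cache path and upstream URL are made up:</p>
<pre><code>#!/bin/sh
# Lazy-mirror sketch: serve from cache, fetch on a miss, keep forever.
CACHE_DIR="${CACHE_DIR:-/var/cache/rpmgot}"
UPSTREAM="${UPSTREAM:-https://mirror.example.com/fedora}"

# fetch_pkg REL_PATH -> prints the local cached file path
fetch_pkg() {
    rel="$1"
    local_file="$CACHE_DIR/$rel"
    if [ ! -f "$local_file" ]; then
        # cache miss: fetch from the real mirror, once
        mkdir -p "$(dirname "$local_file")"
        curl -fsSL -o "$local_file" "$UPSTREAM/$rel" || return 1
    fi
    printf '%s\n' "$local_file"
}</code></pre>
<p>Because a new package version always means a new URL, the cached copy never needs to be revalidated.</p>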
<p>You can find this in <a href="http://github.com/alejandroliu/rpmgot">Github</a>.</p>
SSL Certificates
urn:uuid:0df847bd-a020-8728-bee2-a5da51f9e6f3
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So it is a more dangerous world out there. You can start securing web sites using self-signed certificates. Another option is to:</p>
<ol>
<li>Use CloudFlare. This uses a CF certificate between visitors and the CF CDN, while a self-signed certificate can be used between the CF CDN and your web server.</li>
<li>Use <a href="https://www.startssl.com/">startssl</a></li>
</ol>
Convert HTML to Markdown
urn:uuid:bf2d755e-cb57-9ea1-c658-a2762ff4a718
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>These web sites convert to Markdown:</p>
<ul>
<li><a href="http://heckyesmarkdown.com/">Markdownifier</a>: Convert the given URL</li>
<li><a href="https://domchristie.github.io/to-markdown/">to-markdown</a>: Convert HTML snippets</li>
<li><a href="http://domchristie.github.io/turndown/">turndown</a></li>
</ul>
Raspberry Pi - Low cost CCTV
urn:uuid:47ebb255-3510-6aea-4534-77a83e17c44a
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>A good tutorial on creating a low cost surveillance camera using
the Raspberry Pi camera module and one of those fake surveillance
camera housings.</p>
<p><img src="/images/2014/FJJOOSJHO7X6PIT.MEDIUM.jpg" alt="FJJOOSJHO7X6PIT.MEDIUM" /></p>
<p><a href="http://www.instructables.com/id/Raspberry-Pi-as-low-cost-HD-surveillance-camera/">Instructables</a> has a good tutorial on creating a low cost surveillance camera.</p>
<p>Essentially makes use of a Pi, the Camera module and fitted into one of those inexpensive fake surveillance cameras.</p>
<p>It uses <a href="http://www.lavrsen.dk/foswiki/bin/view/Motion">motion</a> for the motion detection software.</p>
Raspberry Pi as a Stratum-1 NTP Server
urn:uuid:92b10c83-d540-7b62-cc30-8b1381bbfee9
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is something I found:</p>
<ul>
<li><a href="http://www.satsignal.eu/ntp/Raspberry-Pi-NTP.html">http://www.satsignal.eu/ntp/Raspberry-Pi-NTP.html</a></li>
</ul>
<p>Essentially it requires pairing a Raspberry Pi with a
<a href="http://ava.upuaut.net/store/index.php?route=product/product&path=59_60&product_id=95">NTPI Raspberry Pi GPS addon board</a></p>
<p>On the software side of things you need
<a href="http://vanheusden.com/time/rpi_gpio_ntp/">rpi_gpio_ntp</a></p>
<p><img src="/images/2014/Pi-GPS-shield-2013-10-15-1533-44-b.jpg" alt="pi-gps-shield-2013-10-15-1533-44-b" /></p>
Incredible PBX for RasPBX
urn:uuid:17e6879b-9bfc-de24-5f8f-49d9ca8f6538
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a link to <a href="http://nerdvittles.com/?p=8222">IncrediblePBX for RasPBX</a>. Looks like bundles to run Asterisk PBX'es on a Raspberry Pi. Neat.</p>
dev notes 2014
urn:uuid:a93cb988-f676-5928-756a-bffb04eba255
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Replacement for Make and Autoconf:
<a href="https://embedthis.com/makeme/">MakeMe</a></p>
<p>(If you don't have root but have Android 4+ you can use the
command-line program adb from the Android SDK platform tools to make
backups via a desktop computer)</p>
<p><a href="http://www.chromebookhq.com/five-best-online-ides-making-the-switch-to-a-chromebook/">chromebook ides</a></p>
<h2 id="Dev+Tools" name="Dev+Tools">Dev Tools</h2>
<p>Alternative languages:</p>
<ul>
<li>D : better than C, but not over-the-top like C++? Covers only Win and Linux</li>
<li>Vala : Kinda like C# but for Gnome. Covers Win and Linux. (Android maybe through NDK).</li>
<li>Java: Kinda over the top and heavy. Covers Win and Linux. Android yes, but different GUI library. iOS probably yes.</li>
<li>Python: scripting language. Win, Linux. Android maybe... iOS maybe...</li>
<li>Javascript: scripting language. ALL PLATFORMS.</li>
</ul>
<p>Other options:</p>
<ul>
<li>Python with <a href="http://kivy.org/">Kivy</a></li>
<li><a href="http://haxe.org/">Haxe</a></li>
</ul>
<h2 id="Build+Tools" name="Build+Tools">Build Tools</h2>
<ul>
<li>MakeKit - autotools look &amp; feel but lighter</li>
<li><a href="http://www.dervishd.net/libre-software-projects">mobs</a>: autoconf workalike.</li>
</ul>
<h2 id="Resources" name="Resources">Resources</h2>
<ul>
<li><a href="http://www.dervishd.net/libre-software-projects">libre projects</a> :
syslogd in perl, mobom perl modules.</li>
</ul>
<h2 id="My+own+Notes+App" name="My+own+Notes+App">My own Notes App</h2>
<pre><code>JumpNote + OI Notepad
(Background (Tags support)
Sync)
V
Simple Note backend
V
Tags UI
(Filter, modify tags)
V
Task UI
V
Widget</code></pre>
<p>WebApp + Mobile Dev:</p>
<ul>
<li><a href="http://demux.vektorsoft.com/demux/">A Java framework that works on multiple platforms</a></li>
<li><a href="http://asterclick.drclue.net/WBEA.html">Allows for webapps on desktops</a></li>
<li>PhoneGap</li>
<li><a href="http://www.mobilexweb.com/emulators">Test mobile apps on desktop</a></li>
<li>Javascript optimizer:
<ul>
<li><a href="https://developers.google.com/closure/">closure</a></li>
<li><a href="https://github.com/mishoo/UglifyJS">UglifyJS</a></li>
</ul></li>
<li><a href="https://developer.mozilla.org/en/Rhino_JavaScript_Compiler">JS Compiler</a></li>
<li>Java 2 JS Toolkits:
<ul>
<li><a href="http://code.google.com/webtoolkit/">WebToolKit</a></li>
<li><a href="http://j2s.sourceforge.net/">J2S</a></li>
</ul></li>
<li>Python 2 JS Toolkits:
<a href="http://pyjs.org/">PyJS</a></li>
<li>JS Interpreter for command line:
<ul>
<li><a href="https://developers.google.com/v8/">v8</a></li>
<li><a href="http://en.wikipedia.org/wiki/Nodejs">NodeJS</a></li>
</ul></li>
<li><a href="http://this-voice.org/alchemy/pride.html">Android Alternative IDE</a></li>
</ul>
<p>Documentation around Syncing...</p>
<ul>
<li><a href="http://ericmiles.wordpress.com/2010/09/22/connecting-the-dots-with-android-syncadapter/">sync adapter</a></li>
<li><a href="http://developer.android.com/resources/samples/SampleSyncAdapter/index.html">Sample sync adapter</a></li>
</ul>
<p>Other Notes:</p>
<ul>
<li>Perki replacement that runs on Android.</li>
<li>Use WebKit/PhoneGap + Javascript and HTML5</li>
<li>Markdown library for Javascript</li>
<li>Markdown editor for javascript</li>
<li>TXGR converted to HTML5 Canvas</li>
<li>How do we do background sync?</li>
</ul>
<p>More example code:</p>
<ul>
<li><a href="http://code.google.com/p/jumpnote/">jumpnote</a></li>
<li><a href="http://www.java2s.com/Open-Source/Android/CatalogAndroid.htm">CatalogAndroid</a></li>
</ul>
<p>We want to have it for Android, Linux and Windows.</p>
<ul>
<li><a href="http://libreplanet.org/wiki/Group:Hardware/Howto_have_a_free_android_sdk">Free Android SDK</a></li>
</ul>
<p>We need to research:</p>
<ul>
<li>Alternative to freewrap
<ul>
<li><a href="http://jsmooth.sourceforge.net/">jsmooth</a></li>
<li><a href="http://launch4j.sourceforge.net/">launch4j</a></li>
<li><a href="http://www.thisiscool.com/gcc_mingw.htm">gcc mingw</a></li>
<li><a href="http://vertis.github.com/2007/06/24/native-java-with-gcj-and-swt.html">gcj+swt</a></li>
<li><a href="http://winrun4j.sourceforge.net/">winrun4j</a></li>
</ul></li>
<li>Alternative to Canvas
<ul>
<li><a href="http://www.piccolo2d.org/">piccolo2d</a></li>
<li><a href="http://www.jhotdraw.org/">jhotdraw</a></li>
<li><a href="http://www.manageability.org/blog/stuff/open-source-structured-graphics-libraries-in-java">Contains an overview of options...</a></li>
<li><a href="http://jean-philippe.leboeuf.name/notebook/archives/000315.html">Another overview of options</a></li>
</ul></li>
<li>Which Toolkit to use (SWT, Swing, AWT, etc)</li>
</ul>
<p>A freewrap-like tool for Python:</p>
<p><a href="http://freecode.com/projects/pyinstaller">pyinstaller</a></p>
<p>More Android Dev options:</p>
<ul>
<li>PhoneGAP</li>
<li><a href="http://kivy.org/">Python, multi platform</a></li>
<li><a href="https://code.google.com/p/android-python27/w/list">Python on android</a></li>
</ul>
Resizing a Linux RAID
urn:uuid:a676da51-278c-1911-9cf7-961feb4a87b3
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>It is possible to migrate the whole array to larger drives
(e.g. 250 GB to 1 TB) by replacing one by one. In the end the number
of devices will be the same, the data will remain intact, and you will
have more space available to you.</p>
<h4 id="Extending+an+existing+RAID+array" name="Extending+an+existing+RAID+array">Extending an existing RAID array</h4>
<p>In order to increase the usable size of the array, you must increase
the size of all disks in that array. Depending on the size of your
disks, this may take days to complete. It is also important to note
that while the array undergoes the resync process, it is vulnerable
to irrecoverable failure if another drive were to fail. It would (of
course) be a wise idea to completely back up your data before continuing.</p>
<p>First, choose a drive and completely remove it from the array</p>
<pre><code>mdadm -f /dev/md0 /dev/sdd1
mdadm -r /dev/md0 /dev/sdd1</code></pre>
<p>Next, partition the new drive so that you are using the amount of
space you will eventually use on all new disks. For example, if you
are going from 100 GB drives to 250 GB drives, you will want to
partition the new 250 GB drive to use 250 GB, not 100 GB. Also,
remember to set the partition type to <strong>0xDA</strong> - Non-fs data (or
<strong>0xFD</strong>, Linux raid autodetect if you are still using the deprecated
autodetect).</p>
<pre><code>fdisk /dev/sde</code></pre>
<p>Now add the new disk to the array:</p>
<pre><code>mdadm --add /dev/md0 /dev/sde1</code></pre>
<p>Allow the resync to fully complete before continuing. You will now
have to repeat the above steps for <em><strong>each</strong></em> disk in your array.
Once all of the drives in your array have been replaced with larger
drives, we can grow the space on the array by issuing:</p>
<pre><code>mdadm --grow /dev/md0 --size=max</code></pre>
<p>The array now represents one disk using all of the new available space.</p>
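<p>For reference, the fail/remove/add/wait cycle above can be wrapped in a small helper. This is only a sketch; the array and partition names are the same examples used above, and <code>mdadm --wait</code> blocks until the resync finishes:</p>
<pre><code>#!/bin/sh
# Sketch only: wraps the fail/remove/add/wait steps for one member disk.
ARRAY="${ARRAY:-/dev/md0}"

replace_disk() {
    old="$1"; new="$2"
    mdadm -f "$ARRAY" "$old" &&      # mark the old member as failed
    mdadm -r "$ARRAY" "$old" &&      # remove it from the array
    mdadm --add "$ARRAY" "$new" &&   # add the larger replacement
    mdadm --wait "$ARRAY"            # block until the resync completes
}

# Run once per member, e.g.: replace_disk /dev/sdd1 /dev/sde1
# When every member has been replaced: mdadm --grow "$ARRAY" --size=max</code></pre>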
<p>If the array has a write-intent bitmap, it is strongly recommended that
you remove the bitmap <strong>before</strong> increasing the size of the array.
Failure to observe this precaution can lead to the destruction of the
array if the existing bitmap is insufficiently large, especially if
the increased array size necessitates a change to the bitmap's chunksize.</p>
<pre><code>mdadm --grow /dev/mdX --bitmap none
mdadm --grow /dev/mdX --size max
mdadm --wait /dev/mdX
mdadm --grow /dev/mdX --bitmap internal</code></pre>
<p>If the system relies on the disks in the array for booting the OS
(a common approach is to keep /boot in a RAID 1 array, i.e. md0,
across all the disks in the array) then you might need to manually
reinstall the bootloader on each of the new disks, because the array
synchronization does not sync the MBR. This should be done directly
on each disk and not on the array itself (/dev/mdX), and is safe to
do with the array online. For example, to re-install GRUB on the
first disk:</p>
<pre><code>grub
grub> root (hd0,0)
grub> setup (hd0)</code></pre>
<p>You need to repeat this for each new disk that should contain the
bootloader. If you forget to do so, and find that you cannot boot
the system after replacing all the disks, you can boot from a rescue
CD/DVD/USB in order to install the bootloader as instructed above.</p>
<h4 id="Extending+the+filesystem" name="Extending+the+filesystem">Extending the filesystem</h4>
<p>Now that you have expanded the underlying partition, you must now
resize your filesystem to take advantage of it.</p>
<p>You may want to perform an fsck on the file system first to make sure
there are no underlying issues before attempting to resize the file system</p>
<pre><code>fsck /dev/md0</code></pre>
<p>For an ext2/ext3 filesystem:</p>
<pre><code>resize2fs /dev/md0</code></pre>
<p>For a reiserfs filesystem:</p>
<pre><code>resize_reiserfs /dev/md0</code></pre>
<p>Please see filesystem documentation for other filesystems.</p>
<h4 id="LVM%3A+Growing+the+PV" name="LVM%3A+Growing+the+PV">LVM: Growing the PV</h4>
<p>LVM (logical volume manager) abstracts a logical volume
(that a filesystem sits on) from the physical disk. If you are used
to LVM then you are likely used to growing LVs (logical volumes), but
what we grow here is the PV (physical volume) that sits on the
<em>md</em> device (RAID array).</p>
<p>For further LVM documentation, please see the
<a href="http://tldp.org/HOWTO/LVM-HOWTO/">Linux LVM HOWTO</a></p>
<p>Growing the physical volume is trivial:</p>
<pre><code>pvresize /dev/md0</code></pre>
<p>A before-and-after example is:</p>
<pre><code>root@barcelona:~# pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name server1_vg
PV Size 931.01 GB / not usable 558.43 GB
Allocatable yes
PE Size (KByte) 4096
Total PE 95379
Free PE 42849
Allocated PE 52530
PV UUID BV0mGK-FRtQ-KTLv-aW3I-TllW-Pkiz-3yVPd1
root@barcelona:~# pvresize /dev/md0
Physical volume "/dev/md0" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
root@barcelona:~# pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name server1_vg
PV Size 931.01 GB / not usable 1.19 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 238337
Free PE 185807
Allocated PE 52530
PV UUID BV0mGK-FRtQ-KTLv-aW3I-TllW-Pkiz-3yVPd1</code></pre>
<p>The above is the PV part after md0 was grown from ~400GB to ~930GB
(a 400GB disk to a 1TB disk). Note the <em>PV Size</em> descriptions before
and after.</p>
<p>Once the PV has been grown (and hence the size of the VG, volume
group, will have increased), you can increase the size of an LV
(logical volume), and then finally the filesystem, eg:</p>
<pre><code>lvextend -L +50G /dev/server1_vg/home_lv
resize2fs /dev/server1_vg/home_lv</code></pre>
<p>The above grows the <em>home_lv</em> logical volume in the <em>server1_vg</em>
volume group by 50GB. It then grows the ext2/ext3 filesystem on that
LV to the full size of the LV, as per <em>Extending the filesystem</em> above.</p>
<p>Source: <a href="https://raid.wiki.kernel.org/index.php/Growing" title="Raid Wiki">https://raid.wiki.kernel.org/index.php/Growing</a></p>
Wi-Fi Sd Cards
urn:uuid:8d87e03c-a7b6-881e-fd59-a4fb0b063078
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>My latest weekend project. Making a normal digital camera WIFI enabled.</p>
<p><img src="/images/2014/list_WIFISD.png" alt="list_WIFISD.png" /></p>
<p>With the <a href="http://www.transcend-info.com/products/Catlist.asp?FldNo=24">Transcend Wi-Fi SD Card</a> you can convert any digital camera into a Wi-Fi enabled camera.</p>
<p>What I did here is to set it up so that it would automatically upload photos whenever I turn the camera on while at home.</p>
<p>The nice thing about this camera is that it runs a fully functional Linux environment within the card. The manufacturer was also nice enough to give you the opportunity to customize the card by running arbitrary shell scripts from the SD card itself.</p>
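<p>Conceptually, the hook I run on the card looks something like the sketch below. Everything here is a placeholder rather than the actual script: the SSID, the photo path, the upload endpoint, and the way the joined network is detected:</p>
<pre><code>#!/bin/sh
# Hypothetical auto-upload hook; SSID, paths and the upload URL are
# placeholders, not the card's real configuration.
HOME_SSID="myhome"
PHOTO_DIR="${PHOTO_DIR:-/mnt/sd/DCIM}"
UPLOAD_URL="http://192.168.1.10/upload"

current_ssid() {
    # placeholder: ask the card's wifi tooling which network is joined
    cat /tmp/ssid 2>/dev/null
}

upload_new_photos() {
    for f in "$PHOTO_DIR"/*/*.JPG; do
        [ -f "$f" ] || continue
        [ -f "$f.uploaded" ] && continue          # already sent earlier
        curl -fs -F "file=@$f" "$UPLOAD_URL" && : > "$f.uploaded"
    done
}

if [ "$(current_ssid)" = "$HOME_SSID" ]; then     # only upload at home
    upload_new_photos
fi</code></pre>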
<p>My code is in <a href="https://github.com/alejandroliu/sdwifi">github</a>.</p>
Raspberry Pi Weekend project
urn:uuid:1194f112-8151-1c5c-b8b6-58c5c7fdff47
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So I finally took the time to try out a Raspberry Pi. For this weekend project I wanted to do something <em>relatively</em> simple.
Essentially, I wanted to recreate/enhance the functionality of a
<a href="http://www.tp-link.com/en/products/details/?model=TL-WR702N">TL-WR702N</a>.</p>
<p><img src="/images/2014/TL-WR720N-01.jpg" alt="tl-wr702n-01" /></p>
<p>The TL-WR702N Nano Router is a neat device but, being closed, it cannot be customized to do what I wanted. It can be used in
the following modes:</p>
<ul>
<li>AP</li>
<li>Client</li>
<li>Repeater</li>
<li>Router</li>
<li>Bridge</li>
</ul>
<p>Specifically I was interested in the bridge mode. However, rather
than bridging from one SSID to another SSID, I wanted to <em>route/nat</em>
between the two. So in theory, this should be simple to implement (as the
hardware has all the necessary components), but it is not allowed by
the software.</p>
<h1>Enter the Raspberry Pi.</h1>
<p>So the Pi is a mini computer that can be loaded with any software you want. The B-model has built-in Ethernet and enough USB ports to plug in <em>two</em> WIFI adaptors. For this functionality I am using the following:</p>
<ul>
<li>Raspberry-Pi Model-B
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/6f/Raspberry_Pi_B%2B_top.jpg/300px-Raspberry_Pi_B%2B_top.jpg" alt="Raspberry Pi" /></li>
<li>WIFI stick (2 units)
<img src="/images/2014/993655_LB_00_FB.EPS_250.jpg" alt="WIFI" /></li>
</ul>
<p>For the software I am using:</p>
<ul>
<li><a href="https://github.com/gamaral/rpi-buildroot">Raspberry Pi buildroot</a></li>
<li><a href="http://www.realtek.com.tw/downloads/downloadsView.aspx?Langid=1&PNid=21&PFid=48&Level=5&Conn=4&DownTypeID=3&GetDown=false&Downloads=true">hostapd-rtl8192cu from Realtek</a>
You need to get the RTL8188CUS package for Linux.</li>
</ul>
<p>You need two WIFI adaptors, as one will not work as master and slave at the same time. Essentially, one WIFI interface acts as the client WIFI station, while the other acts as a WIFI hotspot.</p>
<p>I chose to use <code>buildroot</code> instead of a normal Linux distro like <a href="http://www.raspbian.org/">Raspbian</a> or <a href="http://archlinuxarm.org/platforms/armv6/raspberry-pi">Arch Linux Arm</a> because I wanted to run it as an embedded system. Normal Linux distros are supposed to be properly <em>shut down</em> and would complain when you simply yank the power cord. The <code>buildroot</code> image I have is customized so that the file system is always mounted read-only. It switches to read-write only to write persistent data and then switches back to read-only.</p>
<p>The normal <code>hostapd</code> that comes with <code>buildroot</code> is the standard open source project and does not come with the <code>rtl8192cu</code> driver. You need to download and build the <code>Realtek</code> version. For this to work, I did the following:</p>
<ol>
<li>
<p>Create a start-up script that sets the whole thing up.</p>
<ul>
<li>sets-up the filesystem</li>
<li>starts <code>syslog</code>, <code>sshd</code>, <code>rngd</code></li>
<li>sets-up <code>eth0</code> and <code>wlan0</code> to be configured by <code>ifplugd</code></li>
<li>starts <code>wpa_supplicant</code> on <code>wlan0</code></li>
<li>starts <code>httpd</code></li>
<li>start and configure <code>wlan1</code> as an Access Point.</li>
<li>start and configure <code>dnsmasq</code> for DNS and DHCP.</li>
</ul>
</li>
<li>Wrote a small web UI to configure the WIFI client.</li>
</ol>
<p>All this stuff can be found in <a href="https://github.com/alejandroliu/harpy">github</a>.</p>
DVD archiving
urn:uuid:cdba3c01-4834-013a-d96e-318784f11cfe
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is my simple procedure for backing up my DVD movies:</p>
<p>Examine the DVD:</p>
<pre><code>dvdbackup -i /dev/sr0 -I</code></pre>
<p>Create a full backup:</p>
<pre><code>dvdbackup -i /dev/dvd -o ~ -M</code></pre>
<p>Creating an ISO:</p>
<pre><code>mkisofs -dvd-video -udf -o ~/dvd.iso ~/movie_name</code></pre>
<p>Testing the newly created ISO:</p>
<pre><code>mplayer dvd:// -dvd-device ~/dvd.iso</code></pre>
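<p>The steps above can be chained into a small wrapper, sketched here. Note that the second argument must match the directory that <code>dvdbackup -M</code> creates (it is named after the disc title):</p>
<pre><code>#!/bin/sh
# Sketch wrapper for the steps above; device and title are examples.
archive_dvd() {
    dev="$1"; name="$2"
    dvdbackup -i "$dev" -I &&                    # examine the disc
    dvdbackup -i "$dev" -o "$HOME" -M &&         # full backup to ~/NAME
    mkisofs -dvd-video -udf -o "$HOME/$name.iso" "$HOME/$name"
}

# e.g. archive_dvd /dev/sr0 movie_name
# then verify with: mplayer dvd:// -dvd-device ~/movie_name.iso</code></pre>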
Private vs. Personal
urn:uuid:137b8501-9e65-bb1c-9049-051938e710dd
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Microsoft Outlook has the option to tag e-mails with a sensitivity tag. Technically this is fairly meaningless. However sometimes I like to use them.</p>
<p>The confidential tag is quite self-explanatory, but I always confuse the difference between private and personal. So here is one possibility...</p>
<ul>
<li>Personal information are things like preferences, political association, likes and dislikes.</li>
<li>Private information are things like bank account numbers. Stuff that you probably would like to keep secret.</li>
</ul>
Cleaning-up Outlook Calendar
urn:uuid:a0c69720-fe01-d50f-6282-2ee04427d1a9
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a procedure I go through at the end of the year
to clean-up my Outlook Calendar. Usually the Outlook
Calendar gets full of junk over time. So this is something
worth doing on a regular basis.</p>
<h2>Procedure for Outlook 2007</h2>
<ol>
<li>Backup calendar folder</li>
<li>Select default calendar</li>
<li>Switch view to <code>Inactive Appointments (non-recurrent)</code></li>
<li>Delete appointments</li>
<li>Switch view to <code>Inactive Appointments (recurrent)</code></li>
<li>Delete appointments</li>
</ol>
<h2>Procedures for Previous versions of Outlook</h2>
<p>This is my procedure for cleaning my Outlook calendar from old appointments and other assorted outdated stuff:</p>
<ol>
<li>Backup your calendar folder (just in case)</li>
<li>Create a temporary Calendar folder</li>
<li>Select your default Calendar</li>
<li>Switch to <code>All Appointments</code> view:
<ul>
<li>View -> Current View -> All Appointments</li>
</ul></li>
<li>Select all the appointments and <strong>move</strong> them to the temporary Calendar folder</li>
<li>Select the temporary Calendar folder</li>
<li>Switch to <code>Active Appointments</code> view:
<ul>
<li>View -> Current View -> Active Appointments</li>
</ul></li>
<li>Select all the visible appointments and <strong>move</strong> them back to the default Calendar folder.</li>
<li>You can now dispose of the temporary folder (or back it up for reference).</li>
</ol>
Chrome Kerberos Authentication
urn:uuid:5676154d-bd90-f308-6d1f-14e70db7a6cb
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>To configure Chrome to use Kerberos authentication you need to start the application with the following parameters:</p>
<ul>
<li><code>auth-server-whitelist</code> - the FQDNs allowed for authentication. Set this to the FQDN of the IdP server. Example:
<code>chrome --auth-server-whitelist="*aai-logon.domain-a.com"</code></li>
<li><code>auth-negotiate-delegate-whitelist</code> - the FQDNs for which credential delegation will be allowed.</li>
</ul>
Deploying Chrome Extensions
urn:uuid:80ebca9c-aab4-5c53-622c-c5bfd38688d5
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>The following links outline how to deploy Chrome extensions in an enterprise manner:</p>
<ul>
<li><a href="https://support.google.com/chrome/a/answer/188453?hl=en">Installing Chrome Extensions</a></li>
<li><a href="http://developer.chrome.com/extensions/external_extensions.html">Other Deployment Options</a></li>
<li><a href="http://www.guidingtech.com/14503/force-install-extensions-scripts-chrome-not-on-web-store/">Force Installing Extensions</a></li>
</ul>
My Must Have Android Apps
urn:uuid:cf0fe745-7c80-61bf-a168-9d13fc172ae0
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a list of my favorite Android Apps:</p>
<h2 id="Essentials" name="Essentials">Essentials</h2>
<ul>
<li>Barcode Scanner - <a href="https://play.google.com/store/apps/details?id=com.google.zxing.client.android">Play Store</a> <a href="https://f-droid.org/repository/browse/?fdid=com.google.zxing.client.android">F-Droid</a></li>
<li>Ghost Commander - <a href="https://f-droid.org/repository/browse/?fdid=com.ghostsq.commander">F-Droid</a></li>
<li><a href="https://f-droid.org/">F-Droid</a> - Alternative application manager. Usually open source stuff with significantly less crapware and ads.</li>
</ul>
<h2 id="Productivity" name="Productivity">Productivity</h2>
<ul>
<li>WordPress - <a href="http://market.android.com/details?id=org.wordpress.android">Play Store</a></li>
<li>KeePassDroid - <a href="https://f-droid.org/repository/browse/?fdid=com.android.keepass">F-Droid</a></li>
<li>GoTasks - <a href="https://play.google.com/store/apps/details?id=com.mile.android.gotasks">Play Store</a></li>
<li>Dropbox - <a href="https://play.google.com/store/apps/details?id=com.dropbox.android">Play Store</a></li>
<li>Quickoffice - <a href="https://play.google.com/store/apps/details?id=com.quickoffice.android">Play Store</a></li>
<li>SimpleNote - <a href="https://play.google.com/store/apps/details?id=com.automattic.simplenote">Play Store</a></li>
</ul>
<h2 id="Social" name="Social">Social</h2>
<ul>
<li>Facebook - <a href="https://play.google.com/store/apps/details?id=com.facebook.katana">Play Store</a></li>
<li>Facebook Messenger <a href="https://play.google.com/store/apps/details?id=com.facebook.orca">Play Store</a></li>
<li>LinkedIn <a href="https://play.google.com/store/apps/details?id=com.linkedin.android">Play Store</a></li>
<li>Skype - <a href="https://play.google.com/store/apps/details?id=com.skype.raider">Play store</a></li>
</ul>
<h2 id="Travel" name="Travel">Travel</h2>
<ul>
<li>KLM - <a href="https://play.google.com/store/apps/details?id=com.afklm.mobile.android.gomobile.klm">Play Store</a></li>
<li>My Tracks - <a href="https://play.google.com/store/apps/details?id=com.google.android.maps.mytracks">Play Store</a></li>
<li><em>TEST</em> Wikivoyage offline - Travel guide. <a href="https://f-droid.org/repository/browse/?fdid=org.github.OxygenGuide">F-Droid</a></li>
</ul>
<h2 id="Tools" name="Tools">Tools</h2>
<ul>
<li>KPN HotSpots - <a href="https://play.google.com/store/apps/details?id=nl.kpn.hotspot">Play Store</a></li>
<li>Timer - <a href="https://f-droid.org/repository/browse/?fdid=org.dpadgett.timer">F-Droid</a></li>
<li>Yahoo Weather - <a href="https://play.google.com/store/apps/details?id=com.yahoo.mobile.client.android.weather">Play Store</a></li>
</ul>
<h2 id="Special+use" name="Special+use">Special use</h2>
<ul>
<li>Searchlight - Flashlight App. <a href="https://f-droid.org/repository/browse/?fdid=com.scottmain.android.searchlight">F-Droid</a></li>
<li>Floating Image - Photo frame <a href="https://f-droid.org/repository/browse/?fdid=dk.nindroid.rss">F-Droid</a></li>
<li>Worldclock - <a href="https://f-droid.org/repository/browse/?fdid=com.irahul.worldclock">F-Droid</a></li>
</ul>
<h2 id="Diagnostics" name="Diagnostics">Diagnostics</h2>
<ul>
<li><em>TEST</em> List my apps - <a href="https://f-droid.org/repository/browse/?fdid=de.onyxbits.listmyapps">F-Droid</a></li>
<li><em>TEST</em> List Apps - <a href="https://f-droid.org/repository/browse/?fdid=net.sourceforge.andsys">F-Droid</a></li>
<li><em>TEST</em> Internet Call settings - <a href="https://f-droid.org/repository/browse/?fdid=eu.siebeck.sipswitch">F-Droid</a></li>
</ul>
<h2 id="Root+stuff" name="Root+stuff">Root stuff</h2>
<ul>
<li>Root Verifier - <a href="https://f-droid.org/repository/browse/?fdid=com.abcdjdj.rootverifier">F-Droid</a></li>
<li>No Frills CPU Control - <a href="https://f-droid.org/repository/browse/?fdid=it.sineo.android.noFrillsCPUClassic">F-Droid</a></li>
<li>Performance Control - <a href="https://f-droid.org/repository/browse/?fdid=com.brewcrewfoo.performance">F-Droid</a></li>
<li>oandbackup - <a href="https://f-droid.org/repository/browse/?fdid=dk.jens.backup">F-Droid</a></li>
</ul>
<h2 id="Premium" name="Premium">Premium</h2>
<ul>
<li>
<p>GPS Test Plus - <a href="http://market.android.com/details?id=com.chartcross.gpstestplus">Play Store</a></p>
</li>
<li>
<p><a href="https://play.google.com/store/apps/details?id=com.access_company.graffiti_pro">Graffiti Pro for Android</a></p>
</li>
<li>
<p><em>TEST</em> Signal Booster - <a href="http://market.android.com/details?id=com.s4bb.signalbooster">Play Store</a></p>
</li>
</ul>
<h2 id="Kids" name="Kids">Kids</h2>
<ul>
<li>PlusMinusTimesDivide - <a href="https://f-droid.org/repository/browse/?fdid=eu.lavarde.pmtd">F-Droid</a></li>
<li>MidiSheetMusic - <a href="https://f-droid.org/repository/browse/?fdid=com.midisheetmusic">F-Droid</a></li>
<li>Learn Music Notes - <a href="https://f-droid.org/repository/browse/?fdid=net.fercanet.LNM">F-Droid</a></li>
</ul>
<h2 id="Stock+Experience+and+Alternatives" name="Stock+Experience+and+Alternatives">Stock Experience and Alternatives</h2>
<p>Used to replace crapware for something closer to the Android experience or to complement incomplete ROMs.</p>
<ul>
<li>
<p>Holo Locker <a href="https://play.google.com/store/apps/details?id=com.mobint.locker">Play Store</a></p>
</li>
<li>
<p>Holo Launcher <a href="https://play.google.com/store/apps/details?id=com.mobint.hololauncher">Play Store</a></p>
</li>
<li>
<p>Contacts+ <a href="https://play.google.com/store/apps/details?id=com.contapps.android">Play Store</a></p>
</li>
<li>
<p>AOSP Calendar - <a href="https://f-droid.org/repository/browse/?fdid=org.sufficientlysecure.standalonecalendar">F-Droid</a></p>
</li>
<li>
<p>Stock Music Player <a href="https://f-droid.org/repository/browse/?fdid=com.android.music">F-Droid</a></p>
</li>
<li>
<p>Launcher3 <a href="https://f-droid.org/repository/browse/?fdid=com.android.launcher3">F-Droid</a></p>
</li>
</ul>
<h2 id="Evaluate" name="Evaluate">Evaluate</h2>
<h3 id="RSS+reader" name="RSS+reader">RSS reader</h3>
<ul>
<li>Tiny Tiny RSS - <a href="http://market.android.com/details?id=org.fox.ttrss">Play Store</a></li>
<li>TTRSS-Reader - <a href="https://f-droid.org/repository/browse/?fdid=org.ttrssreader">F-Droid</a></li>
</ul>
<h3 id="Drawing" name="Drawing">Drawing</h3>
<ul>
<li>Markers - <a href="https://f-droid.org/repository/browse/?fdid=org.dsandler.apps.markers">F-Droid</a></li>
</ul>
<h3 id="Text+editors" name="Text+editors">Text editors</h3>
<ul>
<li>TED - <a href="https://f-droid.org/repository/browse/?fdid=fr.xgouchet.texteditor">F-Droid</a></li>
<li>Text Edit - <a href="https://f-droid.org/repository/browse/?fdid=org.paulmach.textedit">F-Droid</a></li>
<li>Turbo Editor - <a href="https://f-droid.org/repository/browse/?fdid=com.vmihalachi.turboeditor">F-Droid</a></li>
</ul>
<h3 id="Readers" name="Readers">Readers</h3>
<p>Needs to support: EPUB, CHM, PDF, and CBR/CBZ.</p>
<ul>
<li>Page Turner - <a href="http://www.pageturner-reader.org/for-readers/features/">Web site</a> EPUB (Does page location synchronisation)</li>
<li>CoolReader: <a href="https://f-droid.org/repository/browse/?fdid=org.coolreader">F-Droid</a> epub, chm, fb2, txt, rtf, tcr, html</li>
<li>APV PDF Viewer: <a href="https://f-droid.org/repository/browse/?fdid=cx.hell.android.pdfview">F-Droid</a> pdf</li>
<li>Document Viewer: <a href="https://f-droid.org/repository/browse/?fdid=org.sufficientlysecure.viewer">F-Droid</a> pdf, cbz, djvu, xps, fb2</li>
<li>VuDroid: <a href="https://f-droid.org/repository/browse/?fdid=org.vudroid">F-Droid</a> pdf, djvu</li>
<li>ACV: <a href="https://f-droid.org/repository/browse/?fdid=net.androidcomics.acv">F-Droid</a> cbz, jpeg, png, bmp, folders</li>
</ul>
<h3 id="Multimedia" name="Multimedia">Multimedia</h3>
<ul>
<li>XBMC Remote - <a href="https://f-droid.org/repository/browse/?fdid=org.xbmc.android.remote">F-Droid</a></li>
<li>UPNP Player - <a href="https://f-droid.org/repository/browse/?fdid=de.yaacc">F-Droid</a></li>
<li>AMPlayer - For an <a href="https://github.com/ampache/ampache">Ampache server</a> <a href="https://f-droid.org/repository/browse/?fdid=com.orphan.amplayer">F-Droid</a></li>
<li>ServeStream - <a href="https://f-droid.org/repository/browse/?fdid=net.sourceforge.servestream">F-Droid</a></li>
</ul>
<h3 id="Misc" name="Misc">Misc</h3>
<ul>
<li>Box - <a href="https://play.google.com/store/apps/details?id=com.box.android">Play Store</a></li>
<li>Bump - <a href="https://play.google.com/store/apps/details?id=com.bumptech.bumpga">Play Store</a></li>
<li>Daily Money - <a href="https://f-droid.org/repository/browse/?fdid=com.bottleworks.dailymoney">F-Droid</a></li>
<li>HotSpot Login - <a href="https://f-droid.org/repository/browse/?fdid=net.sf.andhsli.hotspotlogin">F-Droid</a></li>
<li>Linphone - <a href="https://f-droid.org/repository/browse/?fdid=org.linphone">F-Droid</a></li>
<li>LinConnect - Send notifications to desktop <a href="https://f-droid.org/repository/browse/?fdid=com.willhauck.linconnectclient">F-Droid</a></li>
<li>Serval Mesh - <a href="https://f-droid.org/repository/browse/?fdid=org.servalproject">F-Droid</a></li>
<li>Read-it later poche - <a href="https://f-droid.org/repository/browse/?fdid=fr.gaulupeau.apps.Poche">F-Droid</a></li>
<li>SSH client & Terminal Emulator - <a href="https://f-droid.org/repository/browse/?fdid=sk.vx.connectbot">F-Droid</a></li>
<li>VNC client - <a href="https://f-droid.org/repository/browse/?fdcategory=System&fdid=android.androidVNC">F-Droid</a></li>
<li>Omnidroid automation - <a href="https://f-droid.org/repository/browse/?fdid=edu.nyu.cs.omnidroid.app">F-Droid</a></li>
<li>Wifi Analyzer - <a href="https://play.google.com/store/apps/details?id=com.farproc.wifi.analyzer">Play Store</a></li>
<li>Solitaire - <a href="https://play.google.com/store/apps/details?id=com.kmagic.solitaire">Play Store</a> <a href="https://f-droid.org/repository/browse/?fdid=com.kmagic.solitaire">F-Droid</a></li>
<li>ReGalAndroid - Client for G2/G3 Menalto Gallery and Piwigo. <a href="https://f-droid.org/repository/browse/?fdid=net.dahanne.android.regalandroid">F-Droid</a></li>
<li>DroidFish - <a href="https://f-droid.org/repository/browse/?fdid=org.petero.droidfish">F-Droid</a></li>
</ul>
<h2 id="Evaluate+for+Group+Contacts%2FCalendar" name="Evaluate+for+Group+Contacts%2FCalendar">Evaluate for Group Contacts/Calendar</h2>
<ul>
<li>aCal - <a href="https://f-droid.org/repository/browse/?fdid=com.morphoss.acal">F-Droid</a></li>
<li>CalDAV Sync Adapter - <a href="https://f-droid.org/repository/browse/?fdid=org.gege.caldavsyncadapter">F-Droid</a></li>
<li>DAVdroid - <a href="https://f-droid.org/repository/browse/?fdid=at.bitfire.davdroid">F-Droid</a></li>
<li>Kolab Client - Dev Preview <a href="https://f-droid.org/repository/browse/?fdid=at.dasz.KolabDroid">F-Droid</a></li>
</ul>
<h2 id="Wishlist%3F" name="Wishlist%3F">Wishlist?</h2>
<ul>
<li>Tetris</li>
<li>Sudoku</li>
<li>RPGs?</li>
<li>Tricorder</li>
<li>Push to talk</li>
<li>Emulators</li>
<li>Android remote control</li>
<li>wifi talkie or search 4 talkie</li>
</ul>
<p>Alternative Keyboards</p>
<ul>
<li><a href="https://f-droid.org/repository/browse/?fdid=de.onyxbits.remotekeyboard">Remote Keyboard</a></li>
</ul>
<p>For older Android versions:</p>
<ul>
<li><a href="https://f-droid.org/repository/browse/?fdid=com.appengine.paranoid_android.lost">Contact Owner</a></li>
<li><a href="https://f-droid.org/repository/browse/?fdid=net.szym.barnacle">Barnacle Wifi Tether</a></li>
<li><a href="https://play.google.com/store/apps/details?id=com.handcent.nextsms">Handcent SMS</a></li>
</ul>
<p>Other things to check out:</p>
<ul>
<li><a href="http://freecode.com/projects/wifix-lite">Fix WIFI</a></li>
<li><a href="http://readwrite.com/2012/05/23/5-push-to-talk-apps-that-turn-your-smartphone-into-a-walkie-talkie">Push to talk</a></li>
<li><a href="http://freecode.com/projects/simple-mtpfs">MTPFS?</a></li>
<li><a href="http://freecode.com/projects/night-time-display">Clock</a></li>
<li><a href="http://freecode.com/projects/copy-sync-paste">Sync clipboards</a></li>
<li><a href="http://freecode.com/projects/mo-da-browser">HTML5 client</a></li>
<li><a href="http://freecode.com/projects/remote-keyboard">Remote Kbd</a></li>
<li><a href="http://www.phonearena.com/news/How-to-make-your-Android-phones-notifications-appear-on-your-computer-desktop_id50461">Send Notifications to Desktop</a></li>
</ul>
wp-cron and cron
urn:uuid:f066674e-ad88-bc3c-26b3-af151e96e1b8
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Normal WordPress operation includes cron-like functionality that runs scheduled tasks as visitors hit the blog.</p>
<p>It is possible to replace this with a standalone cron (like UNIX cron).</p>
<p>To disable the "webcron" (i.e. triggering tasks as URLs are visited), add the following to your <code>wp-config.php</code>:</p>
<pre><code> define('DISABLE_WP_CRON', true);
</code></pre>
<p>Then call this from cron:</p>
<pre><code> curl http://example.com/wp-cron.php
</code></pre>
<p>Optionally you could call <code>wp-cron.php</code> using the <code>php-cli</code> executable.</p>
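<p>The cron call above can be wired into a standard crontab. A minimal sketch, assuming the site lives at example.com and a 15-minute interval is acceptable (the <code>doing_wp_cron</code> query argument is what WordPress's own loopback request passes):</p>

```shell
# m h dom mon dow   command
*/15 * * * * curl -s http://example.com/wp-cron.php?doing_wp_cron > /dev/null 2>&1
```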
Using wget with given IP/vhost
urn:uuid:0a8409f2-6398-6a2e-b9b9-afbc94fd10f4
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a neat trick: for vhosts, you can connect to a given IP address yet still provide the right host name, with the following:</p>
<pre><code> wget http://1.1.1.1/ --header 'Host: www.example.com'
</code></pre>
Using a NAS200 as a Print server
urn:uuid:7db2b820-d4d2-1534-cf33-5d74cc0eb60e
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Last weekend I had a small project: moving my all-in-one printer/scanner from my Xen host server to a spare NAS200 I had lying around. Since the NAS200 has an i486-compatible CPU, and I had been able to run a CentOS 5 distro on it before, I figured it would make a good low-power server.</p>
<p><img src="/images/2013/linksys-nas200.jpg" alt="nas200" /></p>
<p>For that I updated my <a href="http://nascc.sf.net">NASCC firmware</a> so that it would boot from a USB key, and updated my CentOS image creation <a href="https://sourceforge.net/p/nascc/wiki/centos/">script</a>. This worked well; I was able to boot CentOS without much effort.</p>
<p>I myself have an <a href="http://www.cnet.com.au/epson-stylus-cx5500-339283304.htm">Epson Stylus CX5500</a>, which unfortunately only comes with <a href="http://download.ebz.epson.net/dsc/search/01/search/?OSC=LX">binary drivers</a>. This was not much of a problem, since the NAS200 has an i486-compatible CPU. I find this relatively unusual among NAS models.</p>
<p>Alas, the performance was quite disappointing. I should be used to the NAS200 underperforming, but really, this was truly sad. I did not bother to test printing, but I did try scanning: running <code>scanimage</code> on a single page took over 15 minutes before I hit <code>Ctrl+C</code>.</p>
<p>It was worth a try, but the results were subpar. The only takeaways are:</p>
<ul>
<li>I was able to run open source as well as binary blobs on a NAS200 relatively easily.</li>
<li>I was able to use CentOS 5 pretty much out of the box. No recompiles required. I did notice, though, that <code>cups</code> would seg-fault. My guess is that the i386 package somehow got some i686 optimizations in it.</li>
<li>My <a href="https://sourceforge.net/projects/nascc/files/LEC/">Linux Ethernet Console</a> made a very good network console. I was able to troubleshoot some very early boot problems with it.</li>
<li>NAS200 performance for scanning was abysmal.</li>
</ul>
UNIX find with dates
urn:uuid:67037a00-4e3f-97e3-5979-017c50d7fadc
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><code>-atime/-ctime/-mtime</code> test a file's <em>access time</em>, <em>status change time</em> and <em>modification time</em>, measured in days (or in minutes with the <code>-amin/-cmin/-mmin</code> variants). The time interval in the options <code>-ctime</code>, <code>-mtime</code> and <code>-atime</code> is an integer with an optional sign.</p>
<ul>
<li><em>n</em>: If the integer <em>n</em> has no sign, it means exactly <em>n</em> days ago; <code>0</code> means today.</li>
<li><em>+n</em>: if it has a <code>plus</code> sign, it means <em>more than <strong>n</strong> days ago</em>, i.e. older than <em>n</em>.</li>
<li><em>-n</em>: if it has a <code>minus</code> sign, it means <em>less than <strong>n</strong> days ago</em>, i.e. younger than <em>n</em>. It's evident that <code>-1</code> and <code>0</code> are the same and both mean <em>today</em>.</li>
</ul>
<h3 id="Examples%3A" name="Examples%3A">Examples:</h3>
<ul>
<li>
<p>Find everything in your home directory modified in the last 24 hours: <code>$ find $HOME -mtime 0</code></p>
</li>
<li>
<p>Find everything in your home directory modified in the last 7 days: <code>$ find $HOME -mtime -7</code></p>
</li>
<li>
<p>Find everything in your home directory that have <strong>NOT</strong> been modified in the last year: <code>$ find $HOME -mtime +365</code></p>
</li>
<li>
<p>To find html files that have been modified in the last seven days, I can use -mtime with the argument -7 (include the hyphen): <code>$ find . -mtime -7 -name "*.html" -print</code></p>
</li>
</ul>
<p>If you use the number <code>7</code> (without a hyphen), find will match only html files that were modified exactly seven days ago:</p>
<pre><code> `$ find . -mtime 7 -name "*.html" -print`
</code></pre>
<ul>
<li>
<p>To find those html files that I haven't touched for at least 7 days, I use <code>+7</code>:</p>
<p><code>$ find . -mtime +7 -name "*.html" -print</code></p>
</li>
</ul>
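<p>The same sign rules apply at minute granularity. A small self-contained sketch using a throwaway directory (<code>-mmin</code> is the minute counterpart of <code>-mtime</code>):</p>

```shell
# create a fresh file, then list files modified less than 60 minutes ago
dir=$(mktemp -d)
touch "$dir/fresh.txt"
find "$dir" -type f -mmin -60
```

<p>The freshly created file is listed; a file untouched for more than an hour would not be.</p>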
Enable local file caching for NFS share on Linux
urn:uuid:0fc9b10a-12fa-add2-891c-d46c2f210da8
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>In Linux, there is a caching filesystem called <code>FS-Cache</code> which enables
file caching for network file systems such as NFS. <code>FS-Cache</code> is built
into the Linux kernel 2.6.30 and higher. In order for <code>FS-Cache</code> to
operate, it needs a cache back-end which provides the actual storage for
caching. One such cache back-end is <code>cachefiles</code>. Therefore, once you
set up <code>cachefiles</code>, it will automatically enable file caching for NFS shares.</p>
<h2 id="Requirements" name="Requirements">Requirements</h2>
<p>One requirement for setting up <code>cachefiles</code> is that the local filesystem supports user-defined extended file attributes (i.e., <code>xattr</code>), because <code>cachefiles</code> uses <code>xattr</code> to store extra information for cache maintenance. If your local filesystem is ext4, you don't need to worry about this, since <code>xattr</code> is enabled in ext4 by default. However, if you are using an ext3 filesystem, you need to mount it with the "user_xattr" option. To do so, edit /etc/fstab to add the "user_xattr" mount option to the disk partition that will be used by <code>cachefiles</code> for file caching. For example, assuming that /dev/hda1 is such a partition:</p>
<hr />
<pre><code>/dev/hda1 / ext3 rw,user_xattr 0 0
</code></pre>
<hr />
<p>After modifying /etc/fstab, reload it by running:</p>
<pre><code>$ sudo mount -o remount /
</code></pre>
<h2 id="Configure+CacheFiles" name="Configure+CacheFiles">Configure CacheFiles</h2>
<p>In order to set up cache back-end using <code>cachefiles</code>, you need to install <code>cachefilesd</code>, a userspace daemon for managing <code>cachefiles</code>. To install <code>cachefilesd</code> on Ubuntu or Debian:</p>
<pre><code>$ sudo apt-get install cachefilesd
</code></pre>
<p>To install <code>cachefilesd</code> on CentOS, Fedora or RedHat:</p>
<pre><code>$ sudo yum install cachefilesd
$ sudo chkconfig cachefilesd on
</code></pre>
<p>After installation, enable <code>cachefilesd</code> by editing its configuration file as follows.</p>
<pre><code>$ sudo vi /etc/default/cachefilesd
</code></pre>
<hr />
<pre><code>RUN=yes
</code></pre>
<hr />
<p>Next, mount a remote NFS share with <code>fsc</code> option:</p>
<pre><code> $ sudo vi /etc/fstab
</code></pre>
<hr />
<pre><code> 192.168.1.13:/home/xmodulo /mnt nfs rw,hard,intr,fsc
</code></pre>
<hr />
<p>Alternatively, if you mount the remote NFS share from the command line, specify <code>fsc</code> as a command-line option:</p>
<pre><code>$ sudo mount -t nfs 192.168.1.13:/home/xmodulo /mnt -o fsc
</code></pre>
<p>Finally, restart <code>cachefilesd</code>:</p>
<pre><code>$ sudo service cachefilesd restart
</code></pre>
<p>At this point, file caching should be enabled for the mounted NFS share, which means that previously accessed files in the mounted NFS share will be retrieved from local file cache. If you want to flush NFS file cache for any reason, simply restart <code>cachefilesd</code>.</p>
<pre><code> $ sudo service cachefilesd restart
</code></pre>
<p>Source: <a href="http://xmodulo.com/2013/06/how-to-enable-local-file-caching-for-nfs-share-on-linux.html">xmodulo.com</a></p>
sdf.org
urn:uuid:65069855-d9e2-1fb6-6a99-731860126d08
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="http://sdf.org/">sdf.org</a></p>
<p>This one is an interesting site.</p>
<p>The Super Dimension Fortress is a networked community of free software authors, teachers, librarians, students, researchers, hobbyists, computer enthusiasts, the aural and visually impaired. It is operated as a recognized non-profit 501(c)(7) and is supported by its members.</p>
<p>Our mission is to provide remotely accessible computing facilities for the advancement of public education, cultural enrichment, scientific research and recreation. Members can interact electronically with each other regardless of their location using passive or interactive forums. Further purposes include the recreational exchange of information
concerning the Liberal and Fine Arts.</p>
<p>Members have UNIX shell access to games, email, usenet, chat, bboard, webspace, gopherspace, programming utilities, archivers, browsers, and more. The SDF community is made up of caring, highly skilled people who operate behind the scenes to maintain a non-commercial INTERNET.</p>
Driving Continuous Integration from Git
urn:uuid:727913cb-cdde-33f7-274c-fca7b20f1a11
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><strong>Testing, code coverage, style enforcement are all check-in and merge
requirements that can be automated and driven from Git.</strong></p>
<p>If you're among the rising number of Git users out there, you're in
luck: You can automate pieces of your development workflow with Git
hooks. Hooks are a native Git mechanism for firing off custom scripts
before or after certain operations such as commit, merge, applypatch,
and others. Think of them as henchmen for your Git repo. Pre-operation
hooks act as bouncers, guarding your repo with a velvet rope. And
post-operation hooks are your Man Friday, faithfully carrying out
follow-up tasks on your behalf.</p>
<p>Installing hooks for a Git repository is fairly straightforward, and
<a href="http://git-scm.com/book/en/Customizing-Git-Git-Hooks">well-documented</a>.
In this article, we focus on using Git hooks to augment continuous
integration practices, starting with an example that makes combining
Git and continuous integration (CI) less painful. The code is written
in Ruby. Fortunately, Ruby is a language that highly prizes readability,
so even if you don't know Ruby, you can easily follow along.</p>
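<p>Installation itself is just a matter of dropping an executable script into the repository's <code>hooks</code> directory. A minimal sketch against a throwaway bare repository (the hook body here is a placeholder, not the Jenkins script):</p>

```shell
# create a bare repo and install an executable post-receive hook
repo=$(mktemp -d)
git init --bare "$repo" > /dev/null
cat > "$repo/hooks/post-receive" <<'EOF'
#!/bin/sh
# each updated ref arrives on stdin as: <old-sha> <new-sha> <ref-name>
while read old new ref; do
  echo "updated $ref"
done
EOF
chmod +x "$repo/hooks/post-receive"
test -x "$repo/hooks/post-receive" && echo "hook installed"
```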
<h2 id="Automate+CI+Configuration+for+Git+Branches" name="Automate+CI+Configuration+for+Git+Branches">Automate CI Configuration for Git Branches</h2>
<p>One of the blessings of Git is how easy it is to branch off and develop
in isolation. This means the master stays releasable, you get the
freedom to experiment, and your teammates aren't derailed if code from
the experimentation proves to be half-baked. One challenge of Git,
however, is how many branches a team ends up with: scores of active
branches, most of which live for only a few days. Who is going to take
the time to set up continuous integration for all those piddly little
branches? Your henchmen, that's who.</p>
<p>To automatically apply CI to new development branches, you'll use the
"post-receive" hook type. These are server-side hooks, triggered after
pushes to the repository are completed. In such cases, you can use the
post-receive hook to fire off a script that programmatically clones a
master's CI configs and applies them to new branches using the CI
server's exposed API. It might look something like this, when using
the open-source and hugely popular <a href="http://www.jenkins-ci.org/">Jenkins</a>
CI server:</p>
<pre><code class="language-ruby">#!/usr/bin/env ruby
# Ref update hook for creating new Jenkins job
# configurations for newly pushed branches.
#
# requires Ruby 1.9.3+
require 'yaml'
require 'net/https'
require 'uri'
require 'rexml/document'
include REXML
# load ci-config.yml from hook directory
def load_config
hookDir = File.expand_path File.dirname(__FILE__)
configPath = hookDir + "/ci-config.yml"
puts configPath
raise "No ci-config.yml found." unless File.exists? configPath
YAML.load_file(configPath)
end
# Grab the configured Jenkins server
config = load_config
raise "ci-config.yml file is incomplete: missing jenkins_server" unless config["jenkins_server"]
server = config["jenkins_server"]
raise "ci-config.yml file is incomplete: username, password, url and default_job are required for jenkins_server" unless server['url'] and server['username'] and server['password'] and server['default_job']
# iterate through updated refs looking for new branches
ARGF.readlines.each { |line|
args = line.split
oldVal = args[0]
newVal = args[1]
ref = args[2]
if /^0{40}$/.match(oldVal) and ref.start_with?("refs/heads/")
# new branch!
# retrieve the jenkins job config
# TODO only need to do this once!
uri = URI.parse(
"#{server['url']}/job/#{server['default_job']}/config.xml")
req = Net::HTTP::Get.new(uri.to_s)
req.basic_auth server['username'], server['password']
http = Net::HTTP.new(uri.host, uri.port)
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
http.use_ssl = uri.scheme.eql?("https")
# execute the request
response = http.start {|http| http.request(req)}
raise "Bad response from jenkins, is your ci-config.yml correct?" unless response.is_a? Net::HTTPOK
# parse the config.xml from the response
doc = Document.new response.body
doc.root.get_elements(
"//branches/hudson.plugins.git.BranchSpec/name").each { |elem|
# overwrite branch to be our new ref
elem.text = ref
}
# create a new request to upload the modified config.xml
newJob = ""
doc.write newJob
newJobName = ref["refs/heads/".length..-1].gsub("/", "-")
uri = URI.parse("#{server['url']}/createItem?name=#{newJobName}")
req = Net::HTTP::Post.new(uri.to_s,
initheader = {'Content-Type' => 'application/xml'})
req.basic_auth server['username'], server['password']
req.body = newJob
# upload the new job
response = http.start {|http| http.request(req)}
raise "Failed to post new job to jenkins" unless response.is_a? Net::HTTPOK
end
}</code></pre>
<p>With this hook in place, you need only push a dev branch to the repo,
and it will automatically be put under test. (It's possible to run CI
builds against branches using a build parameter to represent the
target branch, but that muddles the build history. The cloning approach
provides a clean, clear history.) Applying every last facet of the CI
scheme to branches isn't necessary; for example, running each and
every branch through the load test gamut might be overkill. But even
if you skip the load and UI tests, and run just unit and API- or
integration-level tests, these are huge wins.</p>
<p>The risk of introducing defects into master is greatly reduced by
testing on the branch before merging. Developers can also work more
efficiently and confidently because of the frequent feedback on
changes (instead of the old merge-then-pray technique). And for teams
who include testing as part of their definition of "done," managers
and scrum master types catch a break. With the Git hook automatically
putting branch code under test, the team's practices and values are
being enforced without the need for nag-mails or raised eyebrows during
stand-up.</p>
<h2 id="Vet+Merges+to+Master" name="Vet+Merges+to+Master">Vet Merges to Master</h2>
<p>Two hallmarks of coding craftsmanship are an affinity for automated
tests, and adherence to stylistic rules (such as avoiding empty
try/catch blocks or duplicated code). Despite best intentions, everyone
neglects best practices from time to time. That's where Git hooks come
in. Pre-receive hooks living in the central repository qualify incoming
pushes, making sure they're good enough to get past the velvet rope.
Let's look at three hooks designed to protect master from slip-ups made
on development branches.</p>
<h2 id="Require+Passing+Branch+Builds" name="Require+Passing+Branch+Builds">Require Passing Branch Builds</h2>
<p>The whole point of working on a development branch is to isolate
yourself and create a space to experiment (read: "break stuff"). So
it's natural to see failing tests on the branch while development is
in progress. When it's time to merge to master, however, things had
better be tidied up. This can be enforced programmatically with a hook
that checks to see whether the incoming push is a merge to master, and
if so, verify that all tests are passing on the branch before
processing the merge.</p>
<p>If you happen to be using Bamboo, you can cleanly fetch test results
for a given commit. If you use Jenkins or its predecessor, <a href="http://www.hudson-ci.org/">Hudson</a>,
you can fetch a set of recent build results then parse through them
to see which builds ran against the commit in question. (This hook,
and those that follow are implemented for the Bamboo CI server, but
they can be implemented in more or less the same way on all CI
servers.)</p>
<pre><code class="language-ruby">#!/usr/bin/env ruby
# Ref update hook for verifying the build status of
# a topic branch being merged into
# a protected branch (e.g. master) from a Bamboo server.
#
# requires Ruby 1.9.3+
require_relative 'ci-util'
require 'json'
# parse args supplied by git: <ref_name> <old_sha> <new_sha>
ref = simple_branch_name ARGV[0]
prevCommit = ARGV[1]
newCommit = ARGV[2]
# test if the updated ref is one we want to enforce green
# builds for
exit_if_not_protected_ref(ref)
# get the tip of the most recently merged branch
tip_of_merged_branch =
find_newest_non_merge_commit(prevCommit, newCommit)
# parse our Bamboo server config
bamboo = read_config("bamboo", ["url", "username", "password"])
# query Bamboo for build results
response = httpGet(
bamboo,
"/rest/api/latest/result/byChangeset/#{tip_of_merged_branch}.json")
body = JSON.parse(response.body)
# tally the results
failed = successful = in_progress = 0
body['results']['result'].collect { |result|
case result['state']
when "Failed"
failed += 1
when "Successful"
successful += 1
when "Unknown"
if result['lifeCycleState'] == "InProgress"
in_progress += 1
end
end
}
# display a short message describing the build status for
#the merged branch and abort if necessary
if failed > 0
# at least one red build - block the branch update
abort "#{shortSha(tip_of_merged_branch)} has #{failed}
red #{pluralize(failed, 'build', 'builds')}."
elsif in_progress > 0
# at least one incomplete build - block the branch update
abort "#{shortSha(tip_of_merged_branch)} has #{in_progress}
#{pluralize(in_progress, 'build', 'builds')} that have not
completed yet."
else
# all green builds - allow the branch update
puts "#{shortSha(tip_of_merged_branch)} has #{successful}
green #{pluralize(successful, 'build', 'builds')}."
end</code></pre>
<h2 id="Enforce+Code+Coverage+Requirements" name="Enforce+Code+Coverage+Requirements">Enforce Code Coverage Requirements</h2>
<p>Along with successful test runs, you want to make sure that new code
added on development branches is tested as thoroughly as code already
on master. This ensures that the overall test coverage level of the
project doesn't drop when a development branch is merged back in. This,
too, can be checked with Git hooks.</p>
<p>A simple Git hook can verify that coverage on the branch meets the
minimum threshold. To enforce this, a hook can be created to compare
the coverage rate on master with that of the branch, and reject the
merge if the branch's coverage is inferior.</p>
<p>Most CI servers don't expose code coverage data through their remote
APIs. But there's an easy work-around: pulling down the code coverage
report. To do this, the build must be configured to publish the report
as a shared artifact, both on master and on the branch build. (Notice
how automatically cloning build configs for development branches comes
in handy here: set it up for master, and get it on the branch for free!)
Once published, you can get the latest coverage report from master by
a call to the CI server. For branch coverage, you can fetch the
coverage report either from the latest build, or for builds related to
the reference (commit) being merged, as shown here for the code
coverage tool Clover.</p>
<pre><code class="language-ruby">#!/usr/bin/env ruby
# Ref update hook for asserting the code coverage of a
# topic branch being merged into a
# protected branch (e.g. master) is the same or better
#
# requires Ruby 1.9.3+
require_relative 'ci-util'
require 'rexml/document'
include REXML
# Determine the code coverage for a particular commit by
# parsing Clover artifacts
def find_coverage(bamboo, commit)
# grab the clover.xml artifact from the build.
# (This assumes a shared artifact named
# 'clover' with 'clover.xml' at the root.)
# Change this for your coverage tool's report name.
clover_xml = shared_artifact_for_commit(bamboo, commit,
bamboo["coverage_key"], "clover/clover.xml")
doc = Document.new clover_xml
# parse out the project metrics element from the response
metrics = XPath.first(doc, "coverage/project/metrics")
# Use algorithm similar to Clover
# (https://confluence.atlassian.com/x/LoHEB) for
# determining coverage percentage
covered_elements =
metrics.attribute("coveredconditionals").value.to_i
covered_elements +=
metrics.attribute("coveredmethods").value.to_i
covered_elements +=
metrics.attribute("coveredstatements").value.to_i
elements = metrics.attribute("conditionals").value.to_i
elements += metrics.attribute("methods").value.to_i
elements += metrics.attribute("statements").value.to_i
coverage = 0
if (elements > 0)
# float division: integer division would truncate the ratio to 0 or 1
coverage = covered_elements.to_f / elements
end
coverage
end
# parse args supplied by git: <ref_name> <old_sha> <new_sha>
ref = simple_branch_name ARGV[0]
prevCommit = ARGV[1]
newCommit = ARGV[2]
# test if the updated ref is one we want to enforce
# green builds for
exit_if_not_protected_ref(ref)
# get the tip of the most recently merged branch
tip_of_merged_branch =
find_newest_non_merge_commit(prevCommit, newCommit)
# parse our bamboo server config
bamboo = read_config("bamboo",
["url", "username", "password", "coverage_key"])
# calculate code coverage for the old and new commits
prev_coverage = find_coverage(bamboo, prevCommit)
new_coverage = find_coverage(bamboo, tip_of_merged_branch)
# if the coverage has dropped for the new commit, block the update
if prev_coverage > new_coverage
abort "Code coverage for #{shortSha(tip_of_merged_branch)} is
only #{new_coverage}! #{ref} is currently at #{prev_coverage}."
else
# if the coverage has increased, TFCIT
puts "Nice work! Code coverage for #{ref} has
increased by #{new_coverage - prev_coverage}."
end</code></pre>
<h2 id="Enforce+Good+Coding+Style" name="Enforce+Good+Coding+Style">Enforce Good Coding Style</h2>
<p>Tests are something no self-respecting software project can do without,
but they only tell part of the story. Open source tools such as
<a href="http://checkstyle.sourceforge.net/">Checkstyle</a> and
<a href="http://findbugs.sourceforge.net/">Findbugs</a> scour your codebase and
provide reports on stylistic violations: anything from duplicated
code to excessively long methods to the use of deprecated methods.
These are hard-won guidelines, and they exist for a reason: Ignoring
them can result in code being harder to understand, harder to maintain,
and more vulnerable to runtime problems.</p>
<p>As with code coverage, each team has a different level of tolerance
for unstylish code. But introducing more style violations is almost
universally agreed-upon as undesirable. In this, Git hooks come to the
rescue. Build artifacts come into play here as well since you can
easily retrieve the violations report. (No CI server we're aware of
exposes static analysis data via remote access API.) So you can create
another pre-receive hook that checks violations for master and the dev
branch, and rejects the push if it would introduce additional errors
into master.</p>
<pre><code class="language-ruby">#!/usr/bin/env ruby
# Ref update hook for asserting that a topic branch
# being merged into a protected
# branch (e.g. master) does not introduce an increase in
# checkstyle violations
#
# requires Ruby 1.9.3+
require_relative 'ci-util'
require 'rexml/document'
include REXML
# Count the checkstyle violations for a particular
# commit by parsing Checkstyle artifacts
def count_checkstyle_violations(bamboo, commit)
# grab the checkstyle.xml artifact from the
# build (assumes a shared artifact named
# 'checkstyle' with 'checkstyle-result.xml' at the root)
checkstyle_xml =
shared_artifact_for_commit(bamboo, commit,
bamboo["checkstyle_key"],
"checkstyle/checkstyle-result.xml")
doc = Document.new checkstyle_xml
# could go to town on the comparison here - but let's just count
# the raw number of errors for the time being
XPath.match(doc, "//error").length
end
# parse args supplied by git: <ref_name> <old_sha> <new_sha>
ref = simple_branch_name ARGV[0]
prevCommit = ARGV[1]
newCommit = ARGV[2]
# test if the updated ref is one we want to enforce green builds for
exit_if_not_protected_ref(ref)
# get the tip of the most recently merged branch
tip_of_merged_branch =
find_newest_non_merge_commit(prevCommit, newCommit)
# parse our bamboo server config
bamboo = read_config("bamboo",
["url", "username", "password", "checkstyle_key"])
# calculate number of checkstyle violations for
#the old and new commits
prev_violations =
count_checkstyle_violations(bamboo, prevCommit)
new_violations =
count_checkstyle_violations(bamboo, tip_of_merged_branch)
# if the number of checkstyle violations has increased, block the update
if new_violations > prev_violations
abort "#{shortSha(tip_of_merged_branch)}
has #{new_violations} checkstyle violations! #{ref}
currently has only #{prev_violations}."
else
# if the number of checkstyle violations has
# decreased, send kudos to the dev
puts "Nice work! #{ref} has #{prev_violations - new_violations}
fewer checkstyle violations than before."
end</code></pre>
<p>To get the original source code and surrounding config files for all
the server-side hooks you've seen here, clone the repo at:
<a href="https://bitbucket.org/tpettersen/git-ci-hooks">bitbucket.org</a>.</p>
<h2 id="Think+Globally%2C+Hook+Locally" name="Think+Globally%2C+Hook+Locally">Think Globally, Hook Locally</h2>
<p>We know that the sooner an issue is discovered, the easier (and faster
and cheaper) it is to fix. That's why hooks that operate on local
clones of a repository are so useful: They offer immediate feedback.
Because we don't get the command prompt back until a hook completes,
client-side hooks should be limited to operations that take only a few
seconds, lest the development flow be interrupted. Let's look at two
hooks that complete almost instantly.</p>
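<p>As a concrete example of a hook that completes almost instantly, here is a client-side pre-commit sketch that rejects staged changes containing a leftover debug statement (the <code>binding.pry</code> pattern and file names are illustrative; the demo builds a throwaway repository so it is self-contained):</p>

```shell
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name dev
# the hook: scan the staged diff for added debug statements
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
if git diff --cached | grep -q '^+.*binding\.pry'; then
  echo "debug statement detected; commit blocked" >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
# stage a file containing a debug statement and try to commit it
echo 'binding.pry' > app.rb
git add app.rb
git commit -q -m "wip" || echo "commit rejected"
```

<p>The commit is blocked by the hook, and the <code>||</code> branch reports it; a clean diff would commit normally.</p>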
<h2 id="Get+Branch+Build+Status" name="Get+Branch+Build+Status">Get Branch Build Status</h2>
<p>Exposing branch build status in the terminal window with a
post-checkout hook catches two fish with one worm: It provides
actionable information, and eliminates the need to switch applications
to get it. Upon checkout (and remember, in Git "checkout" means
switching branches, not pulling down code as with SVN and Perforce),
this hook grabs the branch's head revision number from the local copy.
It then queries the CI server to see whether that revision has been
built, and if so, whether the build succeeded.</p>
<pre><code class="language-ruby">#!/usr/bin/env ruby
# post-checkout hook for determining the build status of the
# checked out ref from the CI server.
#
# Requires Ruby 1.9.3+
require 'yaml'
require 'json'
require 'net/https'
require 'uri'

# utility for correctly pluralizing quantities
def pluralize count, single, multiple
  count == 1 ? single : multiple
end

# parse args supplied by git
ref = ARGV[1]      # ref being checked out
isBranch = ARGV[2] # 0 = file checkout, 1 = branch checkout

# we only care about branch checkouts
if isBranch == "1"
  # initialise build status counts
  failed = successful = in_progress = 0

  # load the CI server configuration from bamboo-config.yml
  hookDir = File.expand_path File.dirname(__FILE__)
  configPath = hookDir + "/bamboo-config.yml"
  raise "No bamboo-config.yml found." unless File.exists? configPath
  config = YAML.load_file(configPath)
  raise "bamboo-config.yml file is incomplete: " +
        "username, password & url are required" unless
        config['url'] and config['username'] and config['password']

  # normalize base url
  baseUrl = config['url']
  # assume https if no scheme specified
  baseUrl = "https://#{baseUrl}" unless baseUrl.start_with? "http"
  # strip trailing slashes
  baseUrl = baseUrl[0..-2] while baseUrl.end_with? "/"

  # prepare a request to hit the build status REST end-point
  build_status_resource = "#{baseUrl}/rest/api/latest/result/byChangeset"
  uri = URI.parse("#{build_status_resource}/#{ref}")
  req = Net::HTTP::Get.new(uri.to_s,
                           {'Content-Type' => 'application/json',
                            'Accept' => 'application/json'})
  req.basic_auth config['username'], config['password']
  http = Net::HTTP.new(uri.host, uri.port)
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE
  http.use_ssl = uri.scheme.eql?("https")

  # execute the request
  response = http.start {|h| h.request(req)}
  if not response.is_a? Net::HTTPOK
    puts 'An unknown error occurred while querying Bamboo for build results.'
    exit
  else
    # if the request succeeded, count the number of failed,
    # successful and in-progress builds for the commit
    body = JSON.parse(response.body)
    body['results']['result'].each do |result|
      case result['state']
      when "Failed"
        failed += 1
      when "Successful"
        successful += 1
      when "Unknown"
        in_progress += 1 if result['lifeCycleState'] == "InProgress"
      end
    end
  end

  # display a short message describing the build status
  # for the checked out commit
  shortRef = ref[0..7]
  if failed > 0
    puts "Warning! #{shortRef} has #{failed} red " +
         "#{pluralize(failed, 'build', 'builds')} " +
         "(plus #{successful} green and #{in_progress} in progress).\n" +
         "Details: #{uri}"
  elsif successful == 0
    puts "#{shortRef} hasn't built yet."
  else
    puts "#{shortRef} has #{successful} green " +
         "#{pluralize(successful, 'build', 'builds')}."
  end
end</code></pre>
<p>If, for example, the hook tells you the head commit on the master branch has
built successfully, then it's a "safe" commit to create a feature
branch from. Or let's say the hook says the build for that revision
failed, yet the team's wallboard shows a green build for that branch
(or vice versa). That means the local copy is out-of-date. Whether to
pull down the updates is determined on a case-by-case basis.</p>
<p>This hook and its config files can be found at <a href="https://bitbucket.org/tpettersen/post-checkout-build-status">bitbucket</a>.</p>
<h2 id="Sanity-Check+Code+Style" name="Sanity-Check+Code+Style">Sanity-Check Code Style</h2>
<p>Checking for violations at merge time is great, but a pre-commit hook
analyzing the changeset keeps the style police off your back entirely.
Start by capturing the names of files being updated or added and
concatenating them. That string of file names is then passed into the
Checkstyle run command. If violations are found, the commit is rejected.</p>
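<p>As a sketch, such a pre-commit hook could look like the following shell script. Note that the <code>checkstyle</code> command name and the <code>checkstyle.xml</code> location are assumptions, not from the original article; adjust them to however Checkstyle is invoked in your project:</p>
<pre><code>#!/bin/sh
# pre-commit: run Checkstyle on the Java files staged for this commit.

# names of added or modified .java files in the index
files=$(git diff --cached --name-only --diff-filter=AM 2>/dev/null | grep '\.java$')

if [ -n "$files" ]; then
  # reject the commit when Checkstyle reports violations
  if ! checkstyle -c checkstyle.xml $files; then
    echo "Checkstyle violations found; commit rejected." >&2
    exit 1
  fi
fi
</code></pre>
<p>Saved as <code>.git/hooks/pre-commit</code> and made executable, this runs before every commit and aborts it if any violation is reported.</p>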
<p>Note that despite variations between them, all static analysis tools
can be used with this approach. Findbugs, for example, must be run on
the entire project because it looks at methods referenced across
classes. But that's not necessarily a deal-breaker. Small and
medium-sized projects can be fully analyzed quickly, especially if
a generous heap space is allocated to the process.</p>
<h2 id="Come+As+You+Are" name="Come+As+You+Are">Come As You Are</h2>
<p>All the ideas presented here are vendor-neutral. Git hooks may not
revolutionize software development the way continuous integration
has, but every time a task, practice or rule is automated, it's a
win.</p>
<p>From <a href="http://www.drdobbs.com/architecture-and-design/driving-continuous-integration-from-git/240161383">Dr. Dobbs Journal</a></p>
Off site backup options
urn:uuid:0f97ed5e-3a47-8e75-a5c9-71d146429a31
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>These are my working notes on doing off-site backups to the cloud.
I am still trying to figure out where to keep off-site backups.</p>
<p>These are the candidates:</p>
<table>
<thead>
<tr>
<th>Site</th>
<th>Free Quota</th>
<th>100GB/Yr</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>AltDrive</td>
<td>30 day</td>
<td>USD 45</td>
<td>Unlimited, Linux binary</td>
</tr>
<tr>
<td>iDrive</td>
<td>5GB</td>
<td>USD 6</td>
<td>Starts at 1TB, Linux binary, API</td>
</tr>
<tr>
<td>pCloud</td>
<td>20GB</td>
<td>USD 10</td>
<td>Starts at 500GB, Binary, Rest API, WebDAV</td>
</tr>
<tr>
<td>DropBox</td>
<td>2GB</td>
<td>USD 12</td>
<td>Starts at 1TB, ZYPKG available</td>
</tr>
<tr>
<td>CopyCom</td>
<td>15GB</td>
<td>USD 20</td>
<td>starts at 250GB, binary</td>
</tr>
<tr>
<td>MEGA</td>
<td>50GB</td>
<td>USD 20</td>
<td>Linux client, Starts at 500GB.</td>
</tr>
<tr>
<td>Google</td>
<td>15GB</td>
<td>USD 24</td>
<td>ZYPKG available</td>
</tr>
<tr>
<td>SkyDrive</td>
<td>15GB</td>
<td>USD 24</td>
<td>WebDav</td>
</tr>
<tr>
<td>ADrive</td>
<td>60 days</td>
<td>USD 25</td>
<td>FTP or WebDAV</td>
</tr>
<tr>
<td>MemoPal</td>
<td>3GB</td>
<td>USD 25</td>
<td>starts at 200GB, Binary or WebDav, ZYPKG available</td>
</tr>
<tr>
<td>iDriveSync</td>
<td>5GB</td>
<td>USD 33</td>
<td>WebDav</td>
</tr>
<tr>
<td>Amazon S3</td>
<td>5GB/1yr</td>
<td>USD 36</td>
<td>pay-per-use, REST API</td>
</tr>
<tr>
<td>box.com</td>
<td>10GB</td>
<td>USD 48</td>
<td>Uses WebDAV</td>
</tr>
<tr>
<td>Crashplan</td>
<td>1 month</td>
<td>USD 48</td>
<td>Unlimited, Binary</td>
</tr>
<tr>
<td>OtherDrive</td>
<td>2GB</td>
<td>USD 55</td>
<td>Java client</td>
</tr>
<tr>
<td>4shared</td>
<td>15GB</td>
<td>USD 78</td>
<td>WebDAV + FTP, Max 100GB?</td>
</tr>
<tr>
<td>CloudMe</td>
<td>3GB</td>
<td>USD 96</td>
<td>WebDAV</td>
</tr>
</tbody>
</table>
<h3 id="Todo" name="Todo">Todo</h3>
<ul>
<li>check out Google Drive and Dropbox.</li>
</ul>
<p>Another tool for comparing vendors is <a href="http://www.cloudwards.net/articles/online-backup/">CloudWards</a>.</p>
<ul>
<li>iDrive API info:
<ul>
<li><a href="https://github.com/idrivevangelist">Sample code</a></li>
<li><a href="http://evs.idrive.com/web-developers-guide.htm">Reference</a></li>
</ul></li>
</ul>
No Comment
urn:uuid:4b2cb150-420e-e956-a18d-d7e54f562285
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><img src="/images/2013/leadership.jpg" alt="About Leadership" /></p>
<p>This should require no explanation...</p>
Alarm Notification
urn:uuid:567023cd-5920-685c-96da-c3e83713a984
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This tutorial describes how to use the alarm manager to set alarms and how to use the notification framework to display them. In short, the sequence goes like this:</p>
<ol>
<li>In an Activity AlarmManager.set is called with a PendingIntent containing a Uri.</li>
<li>When the alarm goes off, the Uri is called triggering a BroadcastReceiver.</li>
<li>In the BroadcastReceiver NotificationManager.notify is called with a PendingIntent.</li>
<li>When the notification is clicked, the Activity in the PendingIntent is started.</li>
</ol>
<h2 id="Alarm+Manager" name="Alarm+Manager">Alarm Manager</h2>
<p>The Alarm Manager is a SystemService, so it is obtained like this:</p>
<pre><code>AlarmManager am = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
</code></pre>
<p>Its <code>set</code> method takes parameters that define when, how and what the alarm sets off. To set an absolute time and have it go off even if the device is on stand-by, use the <code>RTC_WAKEUP</code> type. The PendingIntent parameter is what gets called when the alarm goes off. Unless you want an Activity to start when the alarm goes off, a broadcast-type intent should be used like this:</p>
<pre><code>PendingIntent pendingintent = PendingIntent.getBroadcast(Activity.this, 0, intent, Intent.FLAG_GRANT_READ_URI_PERMISSION);
</code></pre>
<p>The intent parameter can hold a Uri which can contain some information about what the alarm is all about.</p>
<h2 id="BroadcastReceiver+and+NotificationManager" name="BroadcastReceiver+and+NotificationManager">BroadcastReceiver and NotificationManager</h2>
<p>The BroadcastReceiver must be defined in the manifest.xml like this</p>
<pre><code> <receiver
android:name="package.AlarmReceiver"
>
<intent-filter>
<action
android:name="intentname" />
<data
android:scheme="myscheme" />
</intent-filter>
</receiver>
</code></pre>
<p>The intentname and myscheme values must match the name used in the intent and the scheme used in the Uri inside it. In the onReceive method, a notification is raised to show the user that the alarm went off. The Notification Manager is also a SystemService; get it like this:</p>
<pre><code>NotificationManager nm = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
</code></pre>
<p>Create a new Notification object with an icon, a title and the time (probably <code>System.currentTimeMillis()</code>). Set some flags into the defaults like this:</p>
<pre><code>notification.defaults |= Notification.DEFAULT_SOUND;
notification.defaults |= Notification.DEFAULT_VIBRATE;
</code></pre>
<p>Use the <code>setLatestEventInfo</code> method to set another PendingIntent into the notification. The Activity in this intent will be called when the user clicks the notification. Again, you can include a Uri in the intent to pass data to the Activity about which notification was clicked. Finally, be sure to use a unique id when calling the NotificationManager's <code>notify</code> method to actually fire the notification.</p>
<h2 id="Activity" name="Activity">Activity</h2>
<p>When the user has clicked the notification, it is probably safe to remove it. In the Activity called by the notification, the NotificationManager's <code>cancel</code> method can be used to do this. The id that was used to fire the notification in the BroadcastReceiver lets the system know which notification to remove.</p>
<h2 id="Reloading" name="Reloading">Reloading</h2>
<p>Android's Alarm Manager does not remember alarms when the device reboots. In order to restore the alarms you need to take these steps:</p>
<ol>
<li>In the Activity which sets the alarms, also save information about each alarm into a database.</li>
<li>Create an additional BroadcastReceiver which gets called at boot-up to re-install the alarms.</li>
</ol>
<p>Add the <code>android.permission.RECEIVE_BOOT_COMPLETED</code> uses-permission and the following receiver to the manifest.xml</p>
<pre><code> <receiver
android:name="package.AlarmSetter"
>
<intent-filter>
<action
android:name="android.intent.action.BOOT_COMPLETED" />
</intent-filter>
</receiver>
</code></pre>
<p>In the onReceive method, read the database and re-install the alarms in the same way as was done at the top.</p>
<h2 id="Proximity+Alerts" name="Proximity+Alerts">Proximity Alerts</h2>
<p>A similar mechanism can be used for proximity alerts. The LocationManager will fire an alert which is nearly identical to an alarm. The same mechanism to restore alerts can be used after rebooting the device.</p>
DID vendors
urn:uuid:0b48a579-578c-7a20-9dcc-3d6565de7540
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So I have been researching DID vendors with limited success. So far my
leading candidates are:</p>
<table>
<thead>
<tr>
<th>Vendor</th>
<th>Country</th>
<th>Set-up fee</th>
<th>Monthly fee</th>
<th>Per-Minute</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sonetel</td>
<td>NL</td>
<td>EUR 1.40</td>
<td>EUR 1.40</td>
<td>EUR 0.01</td>
</tr>
<tr>
<td>Sonetel</td>
<td>Peru</td>
<td>EUR 5.50</td>
<td>EUR 5.50</td>
<td>EUR 0.01</td>
</tr>
<tr>
<td>Sonetel</td>
<td>USA</td>
<td>EUR 0.70</td>
<td>EUR 0.70</td>
<td>EUR 0.01</td>
</tr>
<tr>
<td>twilio</td>
<td>NL</td>
<td>-</td>
<td>USD 1.00</td>
<td>USD 0.01</td>
</tr>
<tr>
<td>twilio</td>
<td>Peru</td>
<td>-</td>
<td>USD 5.00</td>
<td>USD 0.01</td>
</tr>
<tr>
<td>twilio</td>
<td>USA</td>
<td>-</td>
<td>USD 1.00</td>
<td>USD 0.01</td>
</tr>
<tr>
<td>callcentric</td>
<td>USA</td>
<td>Free</td>
<td>Free</td>
<td>Free</td>
</tr>
<tr>
<td>callcentric</td>
<td>USA</td>
<td>USD 3.95</td>
<td>USD 1.95</td>
<td>USD 0.015</td>
</tr>
<tr>
<td>callcentric</td>
<td>NL</td>
<td>USD 7.95</td>
<td>USD 7.94</td>
<td>USD 0.00</td>
</tr>
<tr>
<td>callcentric</td>
<td>Peru</td>
<td>USD 9.95</td>
<td>USD 11.95</td>
<td>USD 0.00</td>
</tr>
</tbody>
</table>
<p>Note that <a href="http://sonetel.com">Sonetel</a> has a free trial number
available, so it is probably better for an initial set-up.
<a href="http://www.callcentric.com/did/">callcentric</a> has free numbers
(limited area coverage). It also has interesting rates for
outgoing calls.</p>
Parsing JSON in Shell scripts
urn:uuid:4b85e9c1-c2e5-361c-15a7-8c2bc0ff8180
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This can be made simple by using <a href="http://stedolan.github.io/jq/">jq</a>.</p>
<p>This is a command line JSON processor. Here are a couple of examples of what can be done:</p>
<pre><code>$ cat json.txt
{
"name": "Google",
"location":
{
"street": "1600 Amphitheatre Parkway",
"city": "Mountain View",
"state": "California",
"country": "US"
},
"employees":
[
{
"name": "Michael",
"division": "Engineering"
},
{
"name": "Laura",
"division": "HR"
},
{
"name": "Elise",
"division": "Marketing"
}
]
}
</code></pre>
<p>To parse a JSON object:</p>
<pre><code>$ jq '.name' < json.txt
"Google"
</code></pre>
<p>To parse a nested JSON object:</p>
<pre><code>$ jq '.location.city' < json.txt
"Mountain View"
</code></pre>
<p>To parse a JSON array:</p>
<pre><code>$ jq '.employees[0].name' < json.txt
"Michael"
</code></pre>
<p>To extract specific fields from a JSON object:</p>
<pre><code>$ jq '.location | {street, city}' < json.txt
{
"city": "Mountain View",
"street": "1600 Amphitheatre Parkway"
}
</code></pre>
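<p>One more trick worth noting: by default <code>jq</code> prints strings with quotes. The <code>-r</code> flag emits raw text, which is what you want when capturing a value into a shell variable. A small self-contained sketch (the sample file is abbreviated from the one above):</p>
<pre><code>#!/bin/sh
cat > json.txt <<'EOF'
{
  "name": "Google",
  "location": { "city": "Mountain View" },
  "employees": [ { "name": "Michael" }, { "name": "Laura" } ]
}
EOF

# raw (unquoted) output into a shell variable
city=$(jq -r '.location.city' < json.txt)
echo "$city"

# one employee name per line
jq -r '.employees[].name' < json.txt
</code></pre>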
Yealink W52P
urn:uuid:02cd75de-5c0d-8e6d-d3c0-9bea19eac9aa
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="http://www.yealink.com/product_info.aspx?ProductsCateID=308">Yealink W52P</a></p>
<p><img src="/images/2013/W52PwebRpicture20X20CM-01590427430.jpg" alt="phone" /></p>
<p>So I was looking to replace my analog cordless phones mainly because I wanted to have a centralized way to maintain phonebooks. Right now I have two cordless phone that I have to manually enter phonebook entries on the two handsets independently.</p>
<p>Initially I was thinking of getting small/cheap Android tablet and load it with a SIP soft phone. Trying with a couple of tablets I had was not very successful. On one hand my network topology did not work very well, on the other hand, the integration of the SIP soft phone with the directory and the other phone functions did not work as well as I expected.</p>
<p>So when I came across the W52P, I was initially attracted to the low price. Grandstream had a cheaper phone, but it did not have remote phonebooks. After checking the documentation of the W52P, I confirmed that it did have remote phonebook functionality. So I bought it and tried it out.</p>
<p>As a phone itself, it is about the same as the analog phones that it was replacing. The voice quality was pretty good.</p>
<p>Configuring the remote phone book was not as straightforward as I would have hoped. I was reusing the same phonebook script that I had used for my Grandstream phone, but I was getting <code>"CONNECT ERROR"</code> when I tried to use the remote phonebook. This message was not very helpful in figuring out what was wrong. It turns out that, because I was using a dynamic script, the script was not setting the <code>Content-Length</code> HTTP header, which apparently caused the phonebook not to download. Calculating the <code>Content-Length</code> header and setting it made the system work like a charm.</p>
Grandstream GXP1400
urn:uuid:6333cc84-73f0-5888-5c21-ccafa718dfe7
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="http://www.grandstream.com/products/ip-voice-telephony/enterprise-ip-phones/product/gxp1400/1405">Grandstream GXP1400</a></p>
<p><img src="/images/2013/gxp1400.jpg" alt="gs" /></p>
<p>The other day I replaced an analog phone with a Grandstream GXP1400 IP phone. I think it is a great value phone. It is one of the cheapest I could find, yet it supports all the features I was looking for.</p>
<p>Specifically I wanted an IP phone that had:</p>
<ol>
<li>A remote phone directory</li>
<li>A speaker phone</li>
</ol>
<p>Setting it up was simple: create an account on Asterisk and provide its details to the phone. Creating a phone directory was also quite simple; I wrote a small PHP script to manage that.</p>
Backing up GMail
urn:uuid:3d522e6e-bd7d-acdf-2195-bab4d1ae36e4
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>The other day I found <a href="http://gmvault.org/index.html">Gmvault</a>.</p>
<p>Gmvault is an open source Gmail backup software written in Python.</p>
<p>This article provides a good overview on how it works (found it better than the Gmvault documentation):</p>
<ul>
<li><a href="http://xmodulo.com/2013/08/how-to-back-up-and-restore-gmail-account-on-linux.html">How to back up and restore Gmail account on Linux</a></li>
</ul>
<p>It uses IMAP to connect to Gmail and stores messages as <code>.eml</code> (plain text) formatted files. It has a converter to export to mbox and Maildir formats.</p>
<p>Probably would be good for archiving and using <a href="http://freecode.com/projects/udmsearch">mnoGoSearch</a> for search front end.</p>
BOX.com promotions
urn:uuid:d66bd93d-e328-8342-1a17-3bf4a8f570af
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a good link to keep an eye on: <a href="https://support.box.com/entries/22057282-box-promotions-faq">box.com promotions</a></p>
Alternative to DynDNS
urn:uuid:b4107807-b803-22d8-b0fe-cecf115e17d1
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This <a href="http://linuxaria.com/howto/dynamic-dns-with-bash-afraid-org">linuxaria blog article</a> has a script showing how to use dynamic DNS with <a href="http://freedns.afraid.org/">afraid.org</a>.</p>
assist
urn:uuid:a642dfc3-e960-555f-6993-7573fc80f64f
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Assist is my scripted <a href="http://www.archlinux.org/">archlinux</a> installer.</p>
<ul>
<li><a href="https://github.com/alejandroliu/assist">https://github.com/alejandroliu/assist</a></li>
</ul>
<p>By default it gives you a menu driven <a href="http://www.archlinux.org/">archlinux</a> installation with supposedly <em>sensible</em> defaults.</p>
<p>It has command line hooks so that you can perform automated installs using bash scripts to customize it.</p>
<p>It can be deployed from the CDROM by downloading and executing Assist directly from the Internet or by injecting it into the init ramdisk for deployment either from PXE or a custom boot CDROM.</p>
SSH Tricks
urn:uuid:43680883-ee34-4f91-6851-c1893c53b8dd
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>A bunch of stupid SSH tricks that can be useful somehow, somewhere...</p>
<h2>Forcing either IPv4 or IPv6</h2>
<p>This is for the scenario where you know which specific protocol works
to reach a particular host. It is usually good for eliminating the delay
while SSH figures out that it needs to switch IP protocols. For IPv4:</p>
<pre><code>ssh -4 user@hostname.com</code></pre>
<p>For IPv6</p>
<pre><code>ssh -6 user@hostname.com</code></pre>
<h2>Reuse a SSH connection</h2>
<p>Rather than start a new TCP connection to a remote host, simply
multiplex over an existing connection: Add to your <code>~/.ssh/config</code> the
following lines:</p>
<pre><code>Host *
ControlMaster auto
ControlPath /tmp/%r@%h:%p
ControlPersist 4h
# Another option for Control Path
ControlPath ~/.ssh/%r@%h:%p</code></pre>
<h2>Enable compression</h2>
<p>Use the <code>-C</code> option. Or in the config file:</p>
<pre><code>Compression yes</code></pre>
<h2>Using cheaper ciphers</h2>
<p>Using less computation-heavy ciphers in SSH means that less time is spent
on encryption/decryption. The default <strong>AES</strong> cipher used by
OpenSSH can be slow on CPUs without hardware AES support. An independent
study showed that the <strong>arcfour</strong> and <strong>blowfish</strong>
ciphers are faster than <strong>AES</strong>. <strong>blowfish</strong> is a
fast block cipher that was long considered secure, while the
<strong>arcfour</strong> stream cipher is known to have vulnerabilities, so
use caution with <strong>arcfour</strong>. Note that recent OpenSSH releases
have removed both of these ciphers, so this tip only applies to older
installations. Use the <code>-c blowfish-cbc,arcfour</code>
option or in the config file:</p>
<pre><code>Ciphers blowfish-cbc,arcfour</code></pre>
<h2>Improve Session Persistence</h2>
<pre><code>ServerAliveInterval 60
ServerAliveCountMax 10
TCPKeepAlive no</code></pre>
<p>Counterintuitively, setting this results in fewer disconnections from
your host, as transient TCP problems can self-repair in ways that fly
below SSH's radar. You may not want to apply this to scripts that work
via SSH, as "parts of the SSH tunnel going non-responsive" may work in
ways you neither want nor expect!</p>
Running Windows on Linux for Free
urn:uuid:f31fa1a7-bd5e-7679-1b44-c55766d24fd8
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Microsoft is now making Windows VM images available for free for testing Internet Explorer. You can find them at <a href="http://www.modern.ie/en-us">Modern IE testing</a>. Currently the following versions are available:</p>
<ul>
<li>Windows XP Professional SP3 + IE 6 or 8</li>
<li>Windows Vista + IE 7</li>
<li>Windows 7 + IE 8, 9, 10 or 11</li>
<li>Windows 8 + IE 11</li>
<li>Windows 8.1 Preview + IE 11</li>
</ul>
<p>If Administrator access is needed the password is:</p>
<pre><code>Passw0rd!
</code></pre>
Remote VirtualBox
urn:uuid:0c0fea6c-56f5-3a62-f120-05d6e2990bf0
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><a href="http://knobgoblin.org.uk/" title="RemoteBox">RemoteBox</a> is a remote VirtualBox UI. It is similar to <a href="http://sourceforge.net/projects/phpvirtualbox/" title="phpVirtualBox">phpVirtualBox</a> in that it allows you to manage VirtualBox remotely (on a potentially headless server). They differ in their requirements:</p>
<ul>
<li><a href="http://knobgoblin.org.uk/" title="RemoteBox">RemoteBox</a> does not require much on the server, but you need to install it on the client.</li>
<li><a href="http://sourceforge.net/projects/phpvirtualbox/" title="phpVirtualBox">phpVirtualBox</a> only requires a browser and an RDP viewer on the client, but requires a web server with PHP support on the server.</li>
</ul>
IPv6 testing
urn:uuid:d6bdb8ad-f8ff-3384-c1fe-7e5e5a164a79
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>When trying to get on-to the IPv6 Internet, here are a couple of links to do diagnostics:</p>
<ul>
<li><a href="http://www.subnetonline.com/pages/ipv6-network-tools/online-ipv6-ping.php">http://www.subnetonline.com/pages/ipv6-network-tools/online-ipv6-ping.php</a><br />
This actually contains generic network tools.</li>
<li><a href="http://ds.testmyipv6.com/">http://ds.testmyipv6.com/</a><br />
Confirm if your browser is connecting through IPv6</li>
<li><a href="http://test-ipv6.com/">http://test-ipv6.com/</a><br />
Check also DNS</li>
</ul>
PingTool.org
urn:uuid:67f32ed0-c8db-c550-dc04-a2211278d0f2
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Another short and sweet one. This web site provides a number of on-line
tools, useful for diagnosing problems when setting up a home server.</p>
<p><a href="http://pingtool.org/">http://pingtool.org/</a></p>
Web Backups
urn:uuid:9e4288cc-03b8-915a-e146-ead50b58fb76
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><img src="/images/2013/bb-images.jpg" alt="cfback" /></p>
<p>As with any IT system, backups are important. This does not change when using a free shared hosting provider; because it is free, one could argue they are even more important.</p>
<p>For my wordpress web site I used something called <a href="https://github.com/Automattic/WordPress-CLI-Exporter">cli-exporter</a>. It lets you create "Wordpress" export files from the command line so it can be run from <code>cron</code>. This is important because backups <em>have</em> to be automated.</p>
<p>In addition to that, I copy the backup files to an off-site location, using WebDAV to a storage provider. I did this with a simple script and the PHP library <a href="http://code.google.com/p/sabredav/wiki/WebDAVClient">SabreDAV</a>, which makes writing DAV clients quite easy.</p>
<p>I myself don't mind using other people's open source code to do something, but I was surprised that it was not easy to find something that met my criteria. Thanks to the power of open source, though, I was eventually able to find something that fit the bill exactly.</p>
<p>To make things more interesting, because I wanted to keep backup files as compressed Zip archives, my backup scripts did not work on one of the web hosts that I was using: they did not have the <code>zip</code> extension enabled. This is surprising, considering it is quite standard. Luckily I was able to find a pure PHP library, <a href="http://www.phpconcept.net/pclzip/">pclzip</a>.</p>
Mini-Howto: Setup proxy on Ubuntu
urn:uuid:7e044ab8-dfaa-dcc8-1eff-5121ce79205c
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>A quick and dirty mini-howto for setting up a proxy on Ubuntu.
This is meant mostly for quickly setting up a proxy
in a cloud environment.</p>
<p><img src="/images/2013/logo-ubuntu_su-orange-hex.jpg" alt="logo-ubuntu_su-orange-hex" /></p>
<ol>
<li>Install Squid with the following command at the Linux command prompt:
<code>sudo apt-get install squid</code></li>
<li>Edit the Squid config file in <code>/etc/squid</code> adding these lines (the ACL must be defined before it is referenced):
<code>acl local_net src 10.10.0.0/255.255.0.0</code>
<code>http_access allow local_net</code></li>
<li>Save the file, exit the editor and restart Squid. You are now ready to configure your browser to use the proxy server.</li>
<li>Click "Tools," "Options," "Advanced," "Network" and "Settings" in Firefox, which is the normal Ubuntu Linux browser. Select "Manual Proxy Configuration," enter the IP address of your proxy server, enter port 3128 in the Port field and then click "OK."</li>
</ol>
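<p>Putting the steps together, the relevant fragment of <code>/etc/squid/squid.conf</code> looks like this (the subnet is only an example; adjust it to your own LAN):</p>
<pre><code># define the local network first: ACLs must exist before they are used
acl local_net src 10.10.0.0/255.255.0.0
http_access allow local_net

# Squid listens on port 3128 by default
http_port 3128
</code></pre>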
<p>References:</p>
<ul>
<li><a href="http://science.opposingviews.com/set-up-secure-proxy-server-ubuntu-linux-23184.html">http://science.opposingviews.com/set-up-secure-proxy-server-ubuntu-linux-23184.html</a> by Alan Hughes</li>
</ul>
Using CloudFlare
urn:uuid:38397aac-3e8a-0928-59cd-1f1a0ac8373e
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So I have signed up <code>0ink.net</code> to use the <a href="http://www.cloudflare.com" title="CloudFlare">CloudFlare</a> service.</p>
<p><img src="/images/2013/cf-logo-v-rgb.png" alt="CFLogo" /></p>
<p><a href="http://www.cloudflare.com" title="CloudFlare">CloudFlare</a> is a reverse proxy service that is supposed to speed up and improve web server security.</p>
<p>This is done by:</p>
<ul>
<li>a globally distributed reverse proxy cache <a href="http://www.cloudflare.com/system-status.html" title="Cloudflare status">network</a></li>
<li>filtering incoming requests for attacks</li>
<li>optimizing content (e.g. compressing, removing redundant text, etc.)</li>
<li>improving retrieval of web pages that have multiple components.</li>
</ul>
<p>For it to work they need to take over your DNS service. That means that your DNS records resolve to <a href="http://www.cloudflare.com" title="CloudFlare">CloudFlare</a> servers. So when editing your DNS records, the <a href="http://www.cloudflare.com" title="CloudFlare">CloudFlare</a> DNS editor has an extra setting that lets you control whether that DNS entry uses the <a href="http://www.cloudflare.com" title="CloudFlare">CloudFlare</a> network or not.</p>
<p>So if you want to be able to access your web server (i.e. www) for <code>ftp</code> or <code>ssh</code>, then you need
to create an additional CNAME record that points to the web server but is set to bypass the <a href="http://www.cloudflare.com" title="CloudFlare">CloudFlare</a> network.</p>
<p>Some tips on what to do after installing CloudFlare can be found <a href="http://blog.cloudflare.com/top-tips-after-installing-cloudflare" title="Tips on using Cloudflare">here</a>.</p>
<p>This is a handy command to test whether your web server is having problems rather than <a href="http://www.cloudflare.com" title="CloudFlare">CloudFlare</a>:</p>
<pre><code> curl -v -A firefox/4.0 -H 'Host: yourdomain.com' YourServerIP</code></pre>
Askozia Desktop Appliance
urn:uuid:bb340183-7be3-3ed6-5b87-ed10d9e671bf
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><img src="/images/2013/askozia_logo.png" alt="Askozia Logo" /></p>
<p>So last weekend I finally had some time to work with an <a href="http://askozia.com/" title="Askozia PBX">Askozia</a> Desktop Appliance.</p>
<p>It actually arrived much earlier but without a power supply. Initially I thought, "this is strange; I didn't know this supported PoE" (Power over Ethernet). It turns out it didn't, and there had been a shipping mistake. After contacting the vendor, they sent me the required power supply.</p>
<p>Overall I think the product is quite nice. It has a very nice User Interface that is quite easy to use. Simple configurations are indeed very easy to set-up.</p>
<p>My feeling is that, as with any GUI, it usually trades user-friendliness for expressiveness. So while I could configure most of the things I wanted from the UI, it did not fully support my home network topology.</p>
<p>Initially, I had a DMZ vs Home-LAN configuration, with the Askozia box in the DMZ. Because the separation between the DMZ and the Home-LAN was through the router, Askozia considered all the IP phones (in the Home-LAN) to be on the other side of the NAT, so things did not work properly.</p>
<p>After moving the appliance to the same LAN as the IP phones things started working properly. Normally under Asterisk this would be solved through the "localnets" settings. The UI obviously did not expose this setting.</p>
<p>Next time I will try to use the <a href="http://askozia.com/handbook/index.php?title=Help_for_Integrators" title="Askozia Handbook: Integrator Panel">Integrator Panel</a> as it is supposed to expose the Asterisk configuration files directly.</p>
CipherUSB
urn:uuid:bdc7fe3b-8460-8770-5226-941932184e24
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is an interesting concept: essentially an encryption dongle
that encrypts data between your PC and your USB mass storage device.</p>
<p><a href="http://www.addonics.com/products/cipherusb.php">Addonics Product: CipherUSB</a>.</p>
Upgrading pacman config files
urn:uuid:ebb811df-0c59-bafd-0fbd-65bb68af962f
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>When upgrading software packages you sometimes need to merge configuration changes. My recipe on <strong>archlinux</strong> is as follows:</p>
<ol>
<li>Look for <strong>*.pacnew</strong> files.</li>
<li>Retrieve the original version of the file from the old package (cached in <code>/var/cache/pacman</code>).</li>
<li>Use a 3-way merge tool on the old version, the current file and the pacnew file.</li>
</ol>
<p>These are my options for merging:</p>
<ul>
<li><code>diff3 -m</code>: merges the changes into a single file</li>
<li><a href="http://diffuse.sourceforge.net/" title="diffuse">diffuse</a></li>
<li><a href="http://meldmerge.org/" title="meld merge">meld</a></li>
</ul>
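<p>Step 3 with <code>diff3 -m</code> can be illustrated with a toy example (the file names and contents below are made up): "old" is the pristine config from the previous package, "cur" is the locally edited copy, and "new" stands in for the <strong>.pacnew</strong> file:</p>
<pre><code>#!/bin/sh
printf 'port=80\nuser=http\n'   > old.conf   # original shipped config
printf 'port=8080\nuser=http\n' > cur.conf   # local edit: port changed
printf 'port=80\nuser=www\n'    > new.conf   # upstream edit: user changed

# merge both non-conflicting changes into one file
diff3 -m cur.conf old.conf new.conf > merged.conf
cat merged.conf
</code></pre>
<p>The merged file picks up both edits (<code>port=8080</code> and <code>user=www</code>); genuine conflicts are flagged with conflict markers instead.</p>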
libmspack
urn:uuid:417921e8-fa73-6e77-dc97-b97f050bc093
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Recently I found this open source project. Apparently it recently
gained support for unpacking <strong>Exchange Offline Address Book</strong> files. What
I don't know is how you would use such a file after unpacking it.
Intriguing, but it apparently falls a little short. I would probably need
to try it out for myself to see how it works.</p>
<p><a href="http://www.cabextract.org.uk/libmspack/">libmspack</a>.</p>
atratus project
urn:uuid:261d494d-911e-ac38-4004-211973e9a2c9
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>The other day I came across this <a href="http://atratus.org/" title="Atratus project">project</a>. It looks like an interesting idea: a project that lets you run unmodified Linux binaries on Windows. It is more similar to WINE than to, for example, coLinux. While conceptually I understand how it would work at a low level, I am curious how it handles dynamically linked executables. This is something I would like to test out when I have time.</p>
Media Tips
urn:uuid:9a9e12b6-c345-d2ef-e56e-6f146a992afb
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is an article about how different media (and more specifically
video) files can be manipulated.</p>
<p>This is just for historical purposes as now almost everything can be
done using <code>ffmpeg</code> and the right options.</p>
<ul>
<li><a href="http://code.google.com/p/mp4v2/">libmp4v2</a> contains:
<ul>
<li>mp4art - to extract a picture (cover art) from mp4</li>
<li>mp4info - to get meta data from mp4 streams</li>
<li>mp4tags - to set metadata and picture.</li>
</ul></li>
<li>qt-faststart - to move the index to the front, making the mp4 streamable</li>
<li>When encoding:
<ul>
<li>Change max GOP or IDR to around 5 seconds.</li>
<li>2-pass avg bitrate: 800 or even 500...</li>
</ul></li>
</ul>
<h2 id="Concatenating+files%3A" name="Concatenating+files%3A">Concatenating files:</h2>
<h3 id="ffmpeg" name="ffmpeg">ffmpeg</h3>
<p>ffmpeg has a concat protocol, used like:</p>
<pre><code>ffmpeg -i "concat:video1.ts|video2.ts" -c copy output.ts
</code></pre>
<p>There is also a "concat" video filter that may be useful. See
<a href="http://ffmpeg.org/trac/ffmpeg/wiki/How%20to%20concatenate%20%28join,%20merge%29%20media%20files">http://ffmpeg.org/trac/ffmpeg/wiki/How%20to%20concatenate%20%28join,%20merge%29%20media%20files</a></p>
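<p>The wiki page above also describes the concat <em>demuxer</em>, which reads a list file instead of taking the inputs on the command line. A minimal sketch (the video file names here are made up):</p>

```shell
# Build the list file that the concat demuxer reads
cat > mylist.txt <<'EOF'
file 'video1.ts'
file 'video2.ts'
EOF
cat mylist.txt
# Then (not run here):
#   ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.ts
```

The demuxer, unlike the protocol, also works for container formats that cannot simply be byte-concatenated.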
<h3 id="gpac" name="gpac">gpac</h3>
<p>An alternative is <a href="http://gpac.wp.mines-telecom.fr/">gpac</a>. One command
it includes is MP4Box to concatenate MP4s</p>
<pre><code>mp4box -cat sbd0.mp4 -cat sbd1.mp4 -new sbd.mp4
</code></pre>
<h3 id="AviDemux" name="AviDemux">AviDemux</h3>
<p>Of course the avidemux GUI can append files.</p>
<h3 id="Final+notes" name="Final+notes">Final notes</h3>
<p>So far I have not been able to create a reliable media concat recipe.</p>
<h2 id="Media+Gain" name="Media+Gain">Media Gain</h2>
<p><a href="http://mp3gain.sourceforge.net/">mp3gain</a> can be used to normalize
volume levels (without re-encoding). It accomplishes this using
<a href="http://en.wikipedia.org/wiki/ReplayGain">ReplayGain</a>, which needs to be
supported by the player. (XBMC claims to support this.)</p>
Diskless Archlinux
urn:uuid:be1821db-5b05-76af-1715-cebe2700806d
2024-03-05T00:00:00+01:00
Alejandro Liu
<p><em>I have yet to test this recipe</em></p>
<h2 id="Server+Configuration" name="Server+Configuration">Server Configuration</h2>
<p>First of all, we must install the following components:</p>
<ul>
<li>A DHCP server to assign IP addresses to our diskless nodes.</li>
<li>A TFTP server to transfer the boot image (a requirement of all PXE option roms).</li>
<li>A form of network storage (NFS or NBD) to export the Arch installation to the diskless node.</li>
</ul>
<p>Note: dnsmasq is capable of simultaneously acting as both DHCP and TFTP server.</p>
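<p>A minimal dnsmasq sketch of that combined DHCP+TFTP role might look like the following; the address range and TFTP root are assumptions for this particular setup:</p>

```text
# /etc/dnsmasq.conf
dhcp-range=10.0.0.50,10.0.0.150,12h   # hand out addresses to diskless nodes
dhcp-boot=pxelinux.0                  # boot file name sent to PXE clients
enable-tftp
tftp-root=/srv/arch/boot              # serve pxelinux/kernel/initramfs from here
```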
<h3 id="Network+storage" name="Network+storage">Network storage</h3>
<p>The primary difference between NFS and NBD is that, while both let
multiple clients use the same installation, with NBD (since it
manipulates a filesystem directly) you'll need to use the copyonwrite
mode to do so, which discards all writes on client disconnect. In some
situations, however, this might be highly desirable. Install nfs-utils
on the server.</p>
<pre><code># pacman -Syu nfs-utils
</code></pre>
<h3 id="NFSv4" name="NFSv4">NFSv4</h3>
<p>You'll need to add the root of your arch installation to your NFS exports:</p>
<pre><code># vim /etc/exports
/srv/arch *(rw,fsid=0,no_root_squash,no_subtree_check)
</code></pre>
<p>Next, start NFS.</p>
<pre><code># systemctl start rpc-idmapd.service rpc-mountd.service
</code></pre>
<h3 id="NFSv3" name="NFSv3">NFSv3</h3>
<pre><code># vim /etc/exports
/srv/arch *(rw,no_root_squash,no_subtree_check,sync)
</code></pre>
<p>Next, start NFSv3.</p>
<pre><code># systemctl start rpc-mountd.service rpc-statd.service
</code></pre>
<p>Note: If you're not worried about data loss in the event of network
and/or server failure, replace sync with async--additional options
can be found in the NFS article.</p>
<h3 id="NBD" name="NBD">NBD</h3>
<p>Install nbd .</p>
<pre><code># pacman -Syu nbd
</code></pre>
<p>Configure nbd.</p>
<pre><code># vim /etc/nbd-server/config
[generic]
user = nbd
group = nbd
[arch]
exportname = /srv/arch.img
copyonwrite = false
</code></pre>
<p>Note: Set copyonwrite to true if you want to have multiple clients
using the same NBD share simultaneously; refer to man 5 nbd-server for
more details. Start nbd.</p>
<pre><code># systemctl start nbd.service
</code></pre>
<h2 id="Client+installation" name="Client+installation">Client installation</h2>
<p>Next we will create a full Arch Linux installation in a subdirectory on
the server. During boot, the diskless client will get an IP address from
the DHCP server, then boot from the host using PXE and mount this
installation as its root.</p>
<h3 id="Directory+setup" name="Directory+setup">Directory setup</h3>
<h4 id="NBD" name="NBD">NBD</h4>
<p>Create a sparse file of at least 1 gigabyte, and create a btrfs
filesystem on it (you can of course also use a real block device or
LVM if you so desire).</p>
<pre><code># truncate -s 1G /srv/arch.img
# mkfs.btrfs /srv/arch.img
# export root=/srv/arch
# mkdir -p "$root"
# mount -o loop,discard,compress=lzo /srv/arch.img "$root"
</code></pre>
<p>Note: Creating a separate filesystem is required for NBD but optional
for NFS and can be skipped/ignored.</p>
<h3 id="Bootstrapping+installation" name="Bootstrapping+installation">Bootstrapping installation</h3>
<p>Install devtools and arch-install-scripts , and run mkarchroot.</p>
<pre><code># pacman -Syu devtools arch-install-scripts
# mkarchroot -f "$root" base mkinitcpio-nfs-utils nfs-utils
</code></pre>
<p>Note: In all cases mkinitcpio-nfs-utils is still required--the ipconfig
binary used in early boot is provided only by that package. Now the initramfs
needs to be constructed. The shortest configuration, <code>#NFSv3</code>, is presented as a
"base" which all subsequent sections modify as needed.</p>
<h4 id="NFSv3" name="NFSv3">NFSv3</h4>
<pre><code># vim "$root/etc/mkinitcpio.conf"
MODULES="nfsv3"
HOOKS="base udev autodetect net filesystems"
BINARIES=""
</code></pre>
<p>Note: You'll also need to add the appropriate module for your ethernet
controller to the MODULES array. The initramfs now needs to be rebuilt;
the easiest way to do this is via arch-chroot .</p>
<pre><code># arch-chroot "$root" /bin/bash
(chroot) # mkinitcpio -p linux
(chroot) # exit
</code></pre>
<h4 id="NFSv4" name="NFSv4">NFSv4</h4>
<p>Trivial modifications to the net hook are required in order for NFSv4
mounting to work (not supported by nfsmount--the default for the net hook).</p>
<pre><code># sed s/nfsmount/mount.nfs4/ "$root/usr/lib/initcpio/hooks/net" | tee "$root/usr/lib/initcpio/hooks/net_nfs4"
# cp "$root"/usr/lib/initcpio/install/{net,net_nfs4}
</code></pre>
<p>The copy of net is unfortunately needed so it does not get overwritten
when mkinitcpio-nfs-utils is updated on the client installation. From
the base mkinitcpio.conf, replace the nfsv3 module with nfsv4, replace
net with net_nfs4, and add /sbin/mount.nfs4 to BINARIES.</p>
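<p>Putting those three modifications together, the NFSv4 variant of the mkinitcpio.conf from the NFSv3 section would read:</p>

```text
# "$root/etc/mkinitcpio.conf" for NFSv4
MODULES="nfsv4"
HOOKS="base udev autodetect net_nfs4 filesystems"
BINARIES="/sbin/mount.nfs4"
```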
<h4 id="NBD" name="NBD">NBD</h4>
<p>The mkinitcpio-nbd package needs to be installed on the client.</p>
<pre><code># pacman --root "$root" --dbpath "$root/var/lib/pacman" -U mkinitcpio-nbd-0.4-1-any.pkg.tar
</code></pre>
<p>You will then need to append nbd to your HOOKS array after net; net
will configure your networking for you, but will not attempt an NFS mount
if nfsroot is not specified on the kernel line.</p>
<h3 id="Client+configuration" name="Client+configuration">Client configuration</h3>
<p>In addition to the setup mentioned here, you should also set up your
hostname, timezone, locale, and keymap , and follow any other relevant
parts of the Installation Guide .</p>
<h2 id="Bootloader" name="Bootloader">Bootloader</h2>
<h3 id="Pxelinux" name="Pxelinux">Pxelinux</h3>
<p>Install syslinux .</p>
<pre><code># pacman -Syu syslinux
</code></pre>
<p>Copy the pxelinux bootloader (provided by the syslinux package) to the
boot directory of the client.</p>
<pre><code># cp /usr/lib/syslinux/pxelinux.0 "$root/boot"
# mkdir "$root/boot/pxelinux.cfg"
</code></pre>
<p>We also created the pxelinux.cfg directory, which is where pxelinux
searches for configuration files by default. Because we don't want to
discriminate between different host MACs, we then create the default
configuration.</p>
<pre><code># vim "$root/boot/pxelinux.cfg/default"
default linux
label linux
kernel vmlinuz-linux
append initrd=initramfs-linux.img ip=:::::eth0:dhcp nfsroot=10.0.0.1:/
</code></pre>
<p>NFSv3 mountpoints are relative to the root of the server, not fsid=0.
If you're using NFSv3, you'll need to pass 10.0.0.1:/srv/arch to
nfsroot. Or if you are using NBD, use the following append line:</p>
<pre><code>append ro initrd=initramfs-linux.img ip=:::::eth0:dhcp nbd_host=10.0.0.1 nbd_name=arch root=/dev/nbd0
</code></pre>
<p>Note: You will need to change nbd_host and/or nfsroot, respectively,
to match your network configuration (the address of the NFS/NBD server).
The pxelinux configuration syntax is identical to syslinux; refer to the
upstream documentation for more information. The kernel and initramfs
will be transferred via TFTP, so those paths are relative to the TFTP
root. The root filesystem, however, is the NFS mount itself, so that
path is relative to the root of the NFS server.</p>
<pre><code># vim "$root/etc/fstab"
/dev/nbd0 / btrfs rw,noatime,discard,compress=lzo 0 0
</code></pre>
<h3 id="Program+state+directories" name="Program+state+directories">Program state directories</h3>
<p>You could mount /var/log, for example, as tmpfs so that logs from
multiple hosts don't mix unpredictably, and do the same with
/var/spool/cups, so the 20 instances of cups using the same spool
don't fight with each other and make 1,498 print jobs and eat an entire
ream of paper (or worse: toner cartridge) overnight.</p>
<pre><code># vim "$root/etc/fstab"
tmpfs /var/log tmpfs nodev,nosuid 0 0
tmpfs /var/spool/cups tmpfs nodev,nosuid 0 0
</code></pre>
<p>It would be best to configure software that has some sort of
state/database to use unique state/database storage directories for
each host. If you wanted to run puppet , for example, you could simply
use the %H specifier in the puppet unit file:</p>
<pre><code># vim "$root/etc/systemd/system/puppetagent.service"
[Unit]
Description=Puppet agent
Wants=basic.target
After=basic.target network.target
[Service]
Type=forking
PIDFile=/run/puppet/agent.pid
ExecStartPre=/usr/bin/install -d -o puppet -m 755 /run/puppet
ExecStart=/usr/bin/puppet agent --vardir=/var/lib/puppet-%H --ssldir=/etc/puppet/ssl-%H
[Install]
WantedBy=multi-user.target
</code></pre>
<p>Puppet-agent creates vardir and ssldir if they do not exist. If neither
of these approaches are appropriate, the last sane option would be to
create a systemd generator that creates a mount unit specific to the
current host (specifiers are not allowed in mount units, unfortunately).</p>
OpenWRT web
urn:uuid:dfbecfe9-a885-2792-79a7-986e92927c6a
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Some useful tidbits to use when using the OpenWRT embedded web
server (uHTTPD).</p>
<h2>Embedded Lua</h2>
<p>uHTTPd supports running Lua in-process, which can speed up Lua CGI
scripts. LuCI seems to work fine (if not better) with the embedded
Lua interpreter.</p>
<pre><code>root@OpenWrt:~# opkg install uhttpd-mod-lua
Installing uhttpd-mod-lua (18) to root...
Downloading http://downloads.openwrt.org/snapshots/trunk/ar71xx/packages/uhttpd-mod-lua_18_ar71xx.ipk.
Configuring uhttpd-mod-lua.
root@OpenWrt:~# uci set uhttpd.main.lua_prefix=/lua
root@OpenWrt:~# uci set uhttpd.main.lua_handler=/root/test.lua
root@OpenWrt:~# cat /root/test.lua
function handle_request(env)
uhttpd.send("HTTP/1.0 200 OK\r\n")
uhttpd.send("Content-Type: text/plain\r\n\r\n")
uhttpd.send("Hello world.\n")
end
root@OpenWrt:~# /etc/init.d/uhttpd restart
root@OpenWrt:~# wget -qO- http://127.0.0.1/lua/
Hello world.
root@OpenWrt:~#</code></pre>
<p>Tested on Backfire 10.03.1 with uHTTPd 28.</p>
<h2>HTTPS Enable and Certificate Settings and Creation</h2>
<p>First of all, you need to install the <code>uhttpd-mod-tls</code> package in order to pull in the TLS plugin, which adds HTTPS support to uHTTPd. If listen_https is defined in the server configuration but the certificate and private key are missing, then (as of 10.03.1) you'll need to install the <code>luci-ssl</code> meta-package, which in turn will also pull in the <code>px5g</code> script. With this utility, the init script will generate the appropriate certificate and key files when the server is started for the first time, whether by reboot or by manual restart. The <code>/etc/config/uhttpd</code> file contains at the end a section detailing the certificate and key creation parameters:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Required</th>
<th>Default</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>days</td>
<td>integer</td>
<td>no</td>
<td>730</td>
<td>Validity time of the generated certificates in days</td>
</tr>
<tr>
<td>bits</td>
<td>integer</td>
<td>no</td>
<td>1024</td>
<td>Size of the generated RSA key in bits</td>
</tr>
<tr>
<td>country</td>
<td>string</td>
<td>no</td>
<td>DE</td>
<td>ISO country code of the certificate issuer</td>
</tr>
<tr>
<td>state</td>
<td>string</td>
<td>no</td>
<td>Berlin</td>
<td>State of the certificate issuer</td>
</tr>
<tr>
<td>location</td>
<td>string</td>
<td>no</td>
<td>Berlin</td>
<td>Location/city of the certificate issuer</td>
</tr>
<tr>
<td>commonname</td>
<td>string</td>
<td>no</td>
<td>OpenWrt</td>
<td>Common name covered by the certificate</td>
</tr>
</tbody>
</table>
<p>Those values are only needed once, when the files are generated at the next restart.</p>
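<p>As a rough sketch, such a section in <code>/etc/config/uhttpd</code> might look like the following; the section name and exact layout can differ between releases, so treat this as illustrative only:</p>

```text
config cert px5g
    option days        730
    option bits        1024
    option country     DE
    option state       Berlin
    option location    Berlin
    option commonname  OpenWrt
```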
<h2>Basic Authentication (httpd.conf)</h2>
<p>For backward compatibility reasons, uhttpd uses the old Busybox httpd
config file /<code>etc/httpd.conf</code> to define authentication areas and the
associated usernames and passwords. This configuration file is not in
UCI format and usually shipped or generated by external packages like
webif (X-Wrt). Authentication realms are defined in the format
<code>prefix:username:password</code> with one entry per line followed by a
newline.</p>
<ul>
<li>prefix is the URL part covered by the realm, e.g. /cgi-bin to request basic auth for any CGI program</li>
<li>username specifies the username a client has to login with</li>
<li>password defines the secret password required to authenticate</li>
</ul>
<p>The password can be either in plain text format, MD5 encoded or in the
form $p$user where user refers to an account in /etc/shadow or
/etc/passwd. A plain text password can be converted to MD5 encoding by
using the -m switch of the uhttpd executable:</p>
<pre><code>root@OpenWrt:~# uhttpd -m secret
$1$$ysVNzQc4CTMkp5daOdZ.3/</code></pre>
<p>If the $p$- format is used, uhttpd will compare the client-provided
password against the one stored in the shadow or passwd database.</p>
<p><strong>Note that this creates an empty salt!</strong></p>
<h2>URL decoding</h2>
<p>Like Busybox HTTPd, the URL decoding of strings on the command line is supported through the -d switch:</p>
<pre><code>root@OpenWrt:/# uhttpd -d "An%20URL%20encoded%20String%21%0a"
An URL encoded String!</code></pre>
Program Documentation
urn:uuid:711cca73-34c6-0b20-a7bf-2990a7fef1dd
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So these are my ideas on how to document projects. There are three types of documentation types:</p>
<ol>
<li>User guides<br />
Targeted at end-users of the software and people who want a brief overview.</li>
<li>Man pages<br />
Again targeted at end-users but also sysadmins. Usually to address a specific feature.</li>
<li>API level documentation/reference guide.<br />
Targeted at programmers enhancing or maintaining the software.</li>
</ol>
<p>To generate these I would use different tools. In general we would like to embed the documentation with the source code.</p>
<h2 id="User+Guides" name="User+Guides">User Guides</h2>
<p>Either as a stand-alone document or embedded in the code. My default
is to use <a href="http://en.wikipedia.org/wiki/Markdown">Markdown</a> as it can
easily be converted to HTML or PDF as needed.</p>
<h2 id="Man+pages" name="Man+pages">Man pages</h2>
<p>Use <code>manify</code>. Embedded in the source code.</p>
<h2 id="API+reference+documentation" name="API+reference+documentation">API reference documentation</h2>
<p>We need generation tools. So the candidates are:</p>
<ul>
<li>C :
<ul>
<li><a href="http://www.khm.de/~rudi/ZehDok/">zehdok</a></li>
<li><a href="https://github.com/angelortega/mp_doccer">mp_doccer</a></li>
</ul></li>
<li>php : Multiples
<ul>
<li><a href="http://www.phpdoc.org/">phpdoc</a>: The main one</li>
<li><a href="https://github.com/peej/phpdoctor">peej's phpdoctor</a></li>
<li><a href="http://www.apigen.org/">ApiGen</a> : more modern alternative.</li>
<li><a href="https://github.com/victorjonsson/PHP-Markdown-Documentation-Generator">PHP Markdown doc generator</a></li>
</ul></li>
<li>perl : <a href="http://juerd.nl/site.plp/perlpodtut">pod</a></li>
<li>python : <a href="http://docs.python.org/2/library/pydoc.html">pydoc</a></li>
<li>tcl : <a href="http://tcl.jtang.org/tcldoc/tcldoc/">tcldoc</a> or <a href="http://www.doxygen.org">doctools</a></li>
<li>java : <a href="http://www.oracle.com/technetwork/java/javase/documentation/index-137868.html">javadoc</a></li>
<li>javascript : <a href="http://code.google.com/p/jsdoc-toolkit/">jsdoc-toolkit</a></li>
<li>Shell script: <a href="https://github.com/alejandroliu/ashlib/blob/master/shdoc">shdoc</a></li>
</ul>
<p>Multi languages:</p>
<ul>
<li><a href="http://www.doxygen.org">doxygen</a>: C, Objective-C, C#, PHP, Java, Python, IDL (Corba, Microsoft, and UNO/OpenOffice flavors), Fortran, VHDL, Tcl, and to some extent D.</li>
<li><a href="http://rfsber.home.xs4all.nl/Robo/?">ROBODoc</a>: Virtually anything.</li>
</ul>
Git Tutorials
urn:uuid:7f3a0414-3f64-6459-f95f-31938609b9d9
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Reference for Git tutorials</p>
<ul>
<li><a href="http://linux.yyz.us/git-howto.html">http://linux.yyz.us/git-howto.html</a></li>
<li><a href="http://git.or.cz/course/svn.html">http://git.or.cz/course/svn.html</a></li>
<li><a href="http://spheredev.org/wiki/Git_for_the_lazy">http://spheredev.org/wiki/Git_for_the_lazy</a></li>
<li><a href="http://www.kernel.org/pub/software/scm/git/docs/gittutorial.html">http://www.kernel.org/pub/software/scm/git/docs/gittutorial.html</a></li>
<li><a href="http://www.kernel.org/pub/software/scm/git/docs/everyday.html">http://www.kernel.org/pub/software/scm/git/docs/everyday.html</a></li>
<li><a href="https://git.wiki.kernel.org/index.php/GitSubmoduleTutorial">https://git.wiki.kernel.org/index.php/GitSubmoduleTutorial</a></li>
</ul>
Remote Bridging
urn:uuid:054c22b3-b442-9648-b876-c96dddab08c4
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Sometimes we need to connect two or more geographically distributed ethernet networks into one broadcast domain. There can be two different office networks of some company which uses the smb protocol, partially based on broadcast network messages. Another example of such a situation is computer cafes: a couple of computer cafes can provide users a more convenient environment for playing multiplayer computer games without dedicated servers. Both sample networks in this article need one *nix server for bridging. The networks can be connected by any hardware that provides an IP connection between them.</p>
<h1>Connecting Two Remote Local Networks With Transparent Bridging Technique</h1>
<h2>Short description</h2>
<p>In the described configuration we are connecting two remote LANs to make them appear as one network with the 192.168.1.0/24 address space (physically, however, the presence of bridges in the network configuration does not affect the IP protocol and is fully transparent to it, so you can freely select any address space). Both of the bridging servers have two network interfaces: one (eth0 in our example) connected to the LAN, and a second (eth1) used as transport to connect the networks. When the ethernet tunnel between the gateways in both networks is brought up, we will connect the tunnel interfaces with the appropriate LAN interfaces via bridge interfaces. Schematically this configuration looks as follows:</p>
<pre><code> +-------+ +-------+
| br0 | | br0 |
+-------+ +-------+
| | | |
Network 1 | | | | Network 2
----------eth0 tap0---eth1........eth1---tap0 eth0---------------</code></pre>
<h1>Setting Up Bridging Servers</h1>
<p><em>Notice: This article describes Debian GNU/Linux server setup. If you are using another distribution, there can be some differences in network configuration and package management, but the main idea of the described actions will be the same.</em> First of all, we need to check whether the tun and bridge modules are included in the current kernel. If they are not included, we need to rebuild the kernel with the CONFIG_TUN and CONFIG_BRIDGE options. Next, we need to create the tunnel device file for our tunnel:</p>
<pre><code># cd /dev
# ./MAKEDEV tun
# mkdir misc
# ln -s /dev/net /dev/misc/net</code></pre>
<p><em>Notice: The last command is needed to make vtun work, because the author's build for Debian looks for the tunnel device driver at /dev/misc/net/tun.</em> To create the ethernet tunnel between the bridging servers we will use the <strong>vtun</strong> software. Once <code>vtun</code> is installed, we select one of the bridging servers as master; the second server will be the slave. Change the vtund-start.conf and vtund.conf files in /etc/ on both servers accordingly. The complete config files for the master follow.</p>
<pre><code>/etc/vtund-start.conf
----cut-here------------------------------------
--server-- 5000
----cut-here------------------------------------
/etc/vtund.conf
----cut-here------------------------------------
options {
port 5000; # Listen on this port.
# Syslog facility
syslog daemon;
# Path to various programs
ifconfig /sbin/ifconfig;
route /sbin/route;
firewall /sbin/iptables;
ip /sbin/ip;
}
default {
compress no;
encrypt no;
speed 0;
}
rembridge {
passwd Pa$$Wd;
type ether;
proto udp;
keepalive yes;
compress no;
encrypt yes;
up {
# Connection is Up
ifconfig "%% up";
program "brctl addif br0 %%";
};
down {
# Connection is Down
ifconfig "%% down";
};
}
----cut-here------------------------------------</code></pre>
<p>The slave server config files are as follows:</p>
<pre><code>/etc/vtund-start.conf
----cut-here------------------------------------
rembridge 10.1.1.1 -p
----cut-here------------------------------------</code></pre>
<p><em>Notice: In this example 10.1.1.1 is transport address of master server.</em></p>
<pre><code>/etc/vtund.conf
----cut-here------------------------------------
options {
# Path to various programs
ifconfig /sbin/ifconfig;
route /sbin/route;
firewall /sbin/iptables;
}
korsar {
pass Pa$$Wd; # Password
type ether; # Ethernet tunnel
up {
# Connection is Up
ifconfig "%% up";
program "brctl addif br0 %%";
};
down {
# Connection is Down
ifconfig "%% down";
};
}
----cut-here------------------------------------</code></pre>
<p>To bring up a bridge between the LAN ethernet interface and our newly created tunnel interface, we need to create a bridge interface. To do this, we add a br0 interface description to the /etc/network/interfaces file:</p>
<pre><code>auto br0
iface br0 inet static
address 192.168.1.199
netmask 255.255.255.0
bridge_ports eth0</code></pre>
<p><em>Notice: The IP addresses on both sides of our bridge must be unique in both networks. eth0 is the LAN interface.</em> Now we need to bring this interface up:</p>
<pre><code># ifup br0</code></pre>
<p>Once the br0 interface is created, we can start vtun with <code>/etc/init.d/vtund restart</code>. If everything was done correctly, we will see the following results on both servers (br0 and tap0 interfaces):</p>
<pre><code># ifconfig tap0
tap0 Link encap:Ethernet HWaddr 00:FF:B2:91:CA:DE
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:701818 errors:0 dropped:0 overruns:0 frame:0
TX packets:405939 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:975889241 (930.6 MiB) TX bytes:44704104 (42.6 MiB)
# ifconfig br0
br0 Link encap:Ethernet HWaddr 00:02:44:2A:03:30
inet addr:192.168.1.199 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2660 errors:0 dropped:0 overruns:0 frame:0
TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:239368 (233.7 KiB) TX bytes:2338 (2.2 KiB)
#</code></pre>
<p>If we need to see the current state of the bridge interface, we can use the brctl tool:</p>
<pre><code># brctl show br0
bridge name bridge id STP enabled interfaces
br0 8000.0002442a0330 no eth0
tap0
#</code></pre>
<p>Once all of the described steps are completed, computers in both networks will be able to communicate with each other. The IP addresses on the bridge interfaces can be used for troubleshooting the network connection. And lastly, if you need to, you can turn on compression or encryption of data within the created tunnel.</p>
Network wiring notes - 8P8C / RJ45
urn:uuid:98e90f61-1c12-cb31-fb03-600754885757
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>What you were probably looking for: T568A/B (10-BASE-T and 100-BASE-TX):</p>
<p>Pin positions are counted from <em>left to right</em> with the <em>contacts facing</em> you (clip on the back) and pointing up <em>(cable coming out the bottom)</em>:</p>
<table>
<thead>
<tr>
<th>Color (568B)</th>
<th>Pin</th>
<th>Color(568A)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Orange-white</td>
<td>1</td>
<td>Green-white</td>
</tr>
<tr>
<td>Orange</td>
<td>2</td>
<td>Green</td>
</tr>
<tr>
<td>Green-white</td>
<td>3</td>
<td>Orange-white</td>
</tr>
<tr>
<td>Blue</td>
<td>4</td>
<td>Blue</td>
</tr>
<tr>
<td>Blue-white</td>
<td>5</td>
<td>Blue-white</td>
</tr>
<tr>
<td>Green</td>
<td>6</td>
<td>Orange</td>
</tr>
<tr>
<td>Brown-white</td>
<td>7</td>
<td>Brown-white</td>
</tr>
<tr>
<td>Brown</td>
<td>8</td>
<td>Brown</td>
</tr>
</tbody>
</table>
<p><img src="/images/2013/wiring1.jpg" alt="wiring1" /></p>
<p>Cut the outer insulation and order the wires (right), cut for equal length (not shown), and insert into plug (left)</p>
<p><img src="/images/2013/wiring2.jpg" alt="wiring1" /></p>
<p>Check that the wires go to the end of the plug by seeing if you can see each wire at the end, preferably see its copper reflecting (left).</p>
<p>Then use the crimping tool (it shoves the pins into the wires, and fixes the wire in the plug).</p>
<p><em>100Mbit ethernet</em> (Fast Ethernet, FE) uses only two pairs - pins 1, 2, 3, and 6, which in the standard wiring are the orange and green wire pairs.</p>
<p>This means you could use the other wires for different things - e.g. one link and a phone (non-conflicting if you use the standard pins for each), or some more custom combination such as two links through one cable.</p>
<p><em>Gigabit ethernet</em> uses all four pairs, so has no creative options.</p>
<h1>On crossover cables</h1>
<p>Given the two plug colorings in 568-type wiring, cables wired with these can be:</p>
<ul>
<li>A <em>straight cable</em>, 568B-568B (or the functionally equivalent 568A-568A, but in terms of colors the B-B variant seems to be used everywhere, probably to avoid confusion)</li>
<li>A <em>crossover cable</em> has 568A on one end and 568B on the other. (Crossing is also effectively done by a switch or hub, so you can use straight cables except in cases where you don't use switches. Crossovers can be useful for direct computer-computer connections.)</li>
</ul>
<p>Gigabit ethernet doesn't need crossovers -- it was decided to handle that case inside the NIC and switches rather than have you do it in the cable. You use straight cables everywhere (NIC-switch-NIC and NIC-NIC).</p>
<p>Gigabit crossovers are rumored to exist (crossing blue and brown in addition to orange and green), but they are unnecessary.</p>
<h1>On Loopbacks</h1>
<p>Loopbacks connect a port to itself. This can be used to test whether a long cable and/or its wallplug is broken, and whether a switch/router port is broken (or perhaps dirty or corroded), both just by seeing whether the link light comes on.</p>
<p>Connect:</p>
<ul>
<li>Pin 1 to 3</li>
<li>Pin 2 to 6</li>
<li>Pin 4 to 7 (for a gBit loopback)</li>
<li>Pin 5 to 8 (for a gBit loopback)</li>
</ul>
<p>(If you're wiring a plug as a loopback, make sure you're not confused about which pin is pin 1.) To create a loopback from a plug-with-cable you cut (that was wired according to 568A or 568B), this means:</p>
<pre><code>Orange-white to Green-white
Orange to Green
Blue to Brown-white (for a gBit loopback)
Brown to Blue-white (for a gBit loopback)</code></pre>
<h1>gBit loopback is a limited concept:</h1>
<p>Gigabit NICs have crosstalk detection (detecting how much signal interferes with other wires), and will likely decide that the loopback is an extreme amount of crosstalk - and may not show link. This means it's often only useful on NICs which let you disable crosstalk detection. Gigabit switches may behave differently (but I'm not sure what the spec says or what the real-world variation is).</p>
<h1>More notes on ethernet wiring</h1>
<p>The wiring used in 10Mbit and 100Mbit (specifically 10-BASE-T and 100-BASE-TX) ethernet over 8P8C (informally RJ45) plugs is defined by <em>TIA/EIA-568-B</em>, which defines two plug wiring alternatives, 568A and 568B. Notice the lack of dashes: 568-B is the standard they are part of, while 568-A is a completely different standard (yes, that naming is stupidly confusing).</p>
<p>Note that both 10Mbit and 100Mbit networking use only pairs 2 and 3 (orange and green) in the standard. (The blue pair is pair 1, orange is pair 2, green is pair 3, and brown is pair 4.)</p>
<p>This means that Americans, or anyone else using 4P/6P-style phone connectors, can use fully wired cables (most are fully wired - relatively few (cheaper) cables are two-pair, Ethernet-only) to wire their house/company and have the same sockets be usable to plug in phones, a computer, or both (with a trivial splitter). Various companies can use this to make their wiring simpler.</p>
<h1>Gigabit ethernet</h1>
<p>GBit ethernet can use cables wired 568-style, preferably rated Cat5e, or better.</p>
<p>Specifically, you want straight wiring and four-pair cable. Most older cables are wired this way, so they can be used at gBit speeds, as 1000-BASE-T uses all four pairs instead of just two.</p>
<p>(If you press your own plugs, it is suggested that you keep the untwisted length as small as possible, to minimize near-end crosstalk)</p>
<p>Some implications:</p>
<ul>
<li>you can't do the phone/networking split mentioned above</li>
<li>on-the-cheap two-pair cables will work, but only because the NICs fall back to 100mbit</li>
<li>you can mix 10/100/1000 in your network by replacing switches (handy for partial/gradual upgrades), without having to worry about the cabling.</li>
</ul>
<p>...as long as the cable is rated Cat5e (or better)</p>
<h1>On cable standards (Cat5, etc.)</h1>
<ul>
<li>Cat5
<ul>
<li>rarely seen - it has fallen out of favour and is barely sold</li>
<li>(...but your company may still be wired with it)</li>
<li>regularly and informally refers to Cat5e</li>
</ul></li>
<li>Cat5e
<ul>
<li>Currently still quite common</li>
<li>100mBit, 1gBit at <100m</li>
</ul></li>
<li>Cat6
<ul>
<li>100mBit, 1gBit at <100m</li>
<li><55m for 10gBit (less if many are bundled)</li>
</ul></li>
<li>Cat6a
<ul>
<li>10GBit at <100m</li>
</ul></li>
<li>Cat7 - stricter about crosstalk (pairs individually insulated)</li>
</ul>
<p>Any cabling that is not shielded will crosstalk, meaning permissible distances are lower when many cables are bundled (relevant to company wiring), and it is more likely to pick up outside interference.</p>
<p>The Shiny Special Expensive cables sold in the sort of computer shops that only sell things that come in plastic boxes are generally not necessary, particularly not on the few-meter cables for your home LAN.</p>
<p>For example, Cat6 was made for 10gBit, but most single computers don't have a source of data fast enough to actually use that speed, and even if they did, 10gBit switches are hard to come by.</p>
<p>Companies may want Cat6 (or perhaps Cat6a) for future compatibility: use gigabit now, on the assumption that nothing replaces copper before the next rewiring.</p>
<p>Cat7 can go beyond 10gBit at short-ish distances, but chances are you won't be needing that any time soon. Only data centers might care.</p>
<h1>Naming pedantry and telephony</h1>
<p>When we say RJ45, we often mean something like "Ethernet wiring on an 8P8C plug."</p>
<p>RJ is the group of plugs that can be described by their positions and conductors, such as the 8P8C used for Ethernet. Strictly speaking, RJ45 refers to a specific telephone wiring on the 8P-style plug (probably the most common of several), while 8P8C refers to the plug itself with no specific wiring. In practice, most people call the plug RJ45 regardless of wiring.</p>
<p>Plugs may have fewer actually present conductors than they have positions, so 8P2C, 8P4C, 8P6C, 8P8C, 6P2C, 6P4C, 6P6C, 4P2C, 4P4C all exist.</p>
<p>When there are fewer conductors than positions, they sit in the middle positions; RJ-style wiring works from the middle out.</p>
<p>For most of us, this is interesting only in that you can plug a phone with a 6P plug into an 8P (Ethernet-plus-phone) socket and have the phone work - the clip aligns the plug in the middle.</p>
<p>If more than the middle two wires are used in telephone wiring, they carry either power, or a second (RJ14) or even third (RJ25) telephone line on the same wire, but consumers rarely see this type of phone wiring.</p>
<p>There are exceptions to the 'always start in the middle', but they tend to be intentionally working around RJ-style wiring.</p>
<p>The most common use of 6P outside of the US is probably phone wiring according to RJ11, which often uses just a single pair in the middle. In the US, 8P connectors with RJ45 phone wiring are common.</p>
Native Kerberos Authentication with SSH
urn:uuid:a49b5e99-1930-93e5-99c9-91833d549d70
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This article is about integrating OpenSSH in a Kerberos environment.
Although OpenSSH can provide passwordless logins (through public/private
keys), it is not a true SSO set-up. This article makes use of the
Kerberos TGT service to implement a true SSO configuration for OpenSSH.</p>
<h2 id="Pre-requisites" name="Pre-requisites">Pre-requisites</h2>
<p>First off, you'll need to make sure that the OpenSSH server's Kerberos configuration (in <code>/etc/krb5.conf</code>) is correct and works, and that the server's keytab (typically <code>/etc/krb5.keytab</code>) contains an entry for <code>host/fqdn@REALM</code> (case-sensitive). I won't go into details on how this is done again; instead, I'll refer you to any one of the recent Kerberos-related articles (like <a href="http://blog.scottlowe.org/2006/08/08/linux-active-directory-and-windows-server-2003-r2-revisited/">this one</a>, <a href="http://blog.scottlowe.org/2006/08/15/solaris-10-and-active-directory-integration/">this one</a>, or <a href="http://blog.scottlowe.org/2006/08/21/more-on-kerberos-authentication-against-active-directory/">even this one</a>). Just be sure that you can issue a <code>kinit -k host/fqdn@REALM</code> and get back a Kerberos ticket without having to specify a password. (This tells you that the keytab is working as expected.)</p>
<h2 id="Configuring+the+SSH+Server" name="Configuring+the+SSH+Server">Configuring the SSH Server</h2>
<p>Configure <code>/etc/ssh/sshd_config</code> with the following:</p>
<pre><code> KerberosAuthentication yes
KerberosTicketCleanup yes
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
UseDNS yes
UsePAM no
</code></pre>
<p>If <code>UseDNS</code> is set to <code>Yes</code>, the ssh server does a reverse host lookup to find the name of the connecting client. This is necessary when host-based authentication is used or when you want last login information to display host names rather than IP addresses. <em>Note:</em> Some ssh sessions stall when performing reverse name lookups because the DNS servers are unreachable. If this happens, you can skip the DNS lookups by setting <code>UseDNS</code> to <code>no</code>. If <code>UseDNS</code> is not explicitly set in the <code>/etc/ssh/sshd_config</code> file, the default value is <code>UseDNS yes</code>.</p>
<h2 id="Configuring+the+SSH+Client" name="Configuring+the+SSH+Client">Configuring the SSH Client</h2>
<p>Edit <code>/etc/ssh/ssh_config</code> accordingly. For example, to enable the Kerberos mechanism for all hosts:</p>
<pre><code> Host *
....
GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes
</code></pre>
<p>or to enable it only for specific domains:</p>
<pre><code>Host *.example.com
GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes
</code></pre>
<p>This limits GSSAPI authentication to only those hosts in the <code>example.com</code> domain. Modify the domain to be the appropriate domain for your network.</p>
<h2 id="Testing+the+Configuration" name="Testing+the+Configuration">Testing the Configuration</h2>
<p>Obtain a valid Kerberos ticket with <code>kinit username</code> from the command line. Once you have a ticket, you should be able to simply <code>ssh fqdn.of.server</code> and be logged in without being prompted for a password. If you do get prompted for a password, go back and double-check your keytab, your SSH daemon configuration, and the time configuration on your OpenSSH server. Because Kerberos requires time synchronization, differences of greater than 5 minutes will cause the authentication to fail.</p>
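<p>To see where the negotiation stands, assuming a user <code>alice</code> in realm <code>EXAMPLE.COM</code> (both names are placeholders), you can inspect the ticket cache and watch the GSSAPI exchange in verbose mode:</p>
<pre><code>kinit alice
klist
# The cache should list krbtgt/EXAMPLE.COM@EXAMPLE.COM
ssh -v fqdn.of.server 2>&1 | grep -i gssapi
# A working setup authenticates via "gssapi-with-mic"
</code></pre>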
My new 0ink.net site
urn:uuid:6eb8ae80-18f8-cfaa-507e-1df26e4e30fc
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So one weekend went by and managed to finish up my <code>0ink.net</code> web site. So now I have:</p>
<ul>
<li><a href="http://wordpress.org/">wordpress</a><br />
For main content.</li>
<li><a href="http://tt-rss.org/">tt-rss</a><br />
This is my answer to Google's shutdown of the Reader service.</li>
<li>Automated backups<br />
Through my own custom scripts.</li>
<li>New e-mail server</li>
<li><a href="http://www.manyfish.co.uk/sitecopy/">sitecopy</a><br />
To manage the web software updates.</li>
<li><a href="http://www.cloudflare.com/">CloudFlare</a><br />
Reverse proxy and web accelerator.</li>
</ul>
Mirroring a Gitorious repository to GitHub
urn:uuid:e3c23958-96cc-4ff1-23da-8e34fe3884c0
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>There is nothing special with <a href="http://github.com/">GitHub</a> and
<a href="http://gitorious.com/">Gitorious</a> here. This technique would work
exactly the same the other way around or with other servers.</p>
<h2 id="In+a+nutshell" name="In+a+nutshell">In a nutshell</h2>
<pre><code># Initial setup
git clone --mirror git://gitorious.org/weasyprint/weasyprint.git weasyprint
GIT_DIR=weasyprint git remote add github git@github.com:SimonSapin/WeasyPrint.git
# In cron
cd /path/to/project && git fetch -q && git push -q --mirror github
</code></pre>
<h2 id="How+it+works" name="How+it+works">How it works</h2>
<p>Mirroring with Git is pretty easy: just pull from or push to another
repository. <a href="http://github.com/">GitHub</a> and
<a href="http://gitorious.com/">Gitorious</a> allow you to push to them or pull
from them, but you can not make them push to somewhere else. You need
something in the middle. Digging a bit in the man pages tells you that
the magic option is <code>--mirror</code>. First, clone your "source" repository:</p>
<pre><code>git clone --mirror git://gitorious.org/weasyprint/weasyprint.git weasyprint
</code></pre>
<p><code>--mirror</code> implies <code>--bare</code>. This repository is not for working in; you
don't want it to have a working directory. More importantly, <code>--mirror</code>
sets up the origin remote so that git fetch will fetch directly into
local branches without doing any merge. It will force the update if
the remote history has diverged from the local one.</p>
<pre><code>git fetch
</code></pre>
<p>Now our local repository is an exact mirror of what we have on
<a href="http://gitorious.com/">Gitorious</a>. Let's push it to <a href="http://github.com/">GitHub</a>:</p>
<pre><code>git remote add github git@github.com:SimonSapin/WeasyPrint.git
git push --mirror github
</code></pre>
<p>The <code>--mirror</code> option for git push is similar to that for git clone:
instead of pushing just a branch, it says that all references (branches,
tags, and so on) should be the same on the remote end as they are here, even if
it means forced updates or removals. Now our
<a href="http://github.com/">GitHub</a> repository also is a mirror. Let's
update it every hour with cron. The <code>-q</code> option suppresses normal
output but keeps error messages, which cron should send you by email if
your server is properly configured.</p>
<pre><code>42 * * * * cd /path/to/weasyprint && git fetch -q && git push -q --mirror github
</code></pre>
<h4 id="Warning%3A+--mirror+is+like+--force" name="Warning%3A+--mirror+is+like+--force">Warning: <code>--mirror</code> is like <code>--force</code></h4>
<p>Both <code>--mirror</code> options are kind of like <code>--force</code> in that you can
lose data if you're not careful. It will make exact mirrors, no question
asked. If you push changes to the mirror's destination, they will be
overwritten/removed on the next update if they are not in the mirror's
source.</p>
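<p>Before trusting a mirror push, you can preview what it would do. Assuming the <code>github</code> remote set up above, a dry run lists the planned updates and deletions without applying them:</p>
<pre><code>git push --mirror --dry-run github
</code></pre>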
<p>Original article by Simon Sapin: <a href="http://exyr.org/2011/git-mirrors/">http://exyr.org/2011/git-mirrors/</a></p>
Kerberos howtos
urn:uuid:72e9aa49-6b5f-bd00-ecd4-2dd8f18b1999
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Kerberos is a network authentication protocol which works on the basis
of "tickets" to allow nodes communicating over a non-secure network to
prove their identity to one another in a secure manner. (Source
<a href="http://en.wikipedia.org/wiki/Kerberos_(protocol)">Kerberos_(protocol)</a> )</p>
<h2 id="Backups" name="Backups">Backups</h2>
<p>Create backup:</p>
<pre><code>kdb5_util dump _dump_file_
</code></pre>
<p>Restore from dump file:</p>
<pre><code>kdb5_util load _dump_file_
</code></pre>
<h2 id="Master%2FSlave+replication" name="Master%2FSlave+replication">Master/Slave replication</h2>
<p>Initial set-up:</p>
<pre><code>(master)# kdb5_util dump _dump_file_
(master)# kprop -d -f _dump_file_ _slave_
</code></pre>
<p>In <code>crontab</code> on master:</p>
<pre><code>krb5_util dump _dump_file_
kprop -f _dump_file_ _slave_
</code></pre>
<h2 id="kadmin+command" name="kadmin+command">kadmin command</h2>
<p>From command line:</p>
<pre><code>kadmin.local -q 'cmd'
</code></pre>
<ul>
<li>listprincs - list principals</li>
<li>ank <em>principal</em> - add a new principal (prompts for a password)</li>
<li>delprinc <em>principal</em> - delete a principal (prompts yes/no)</li>
<li>ank -randkey host/<em>fqdn</em>@REALM - create a service key</li>
<li>ktadd -k <em>filename</em> host/<em>fqdn</em>@REALM - export a key to a keytab file</li>
</ul>
<p>Save keytab in <code>/etc/krb5.keytab</code> and</p>
<pre><code>chown root:root /etc/krb5.keytab
chmod 400 /etc/krb5.keytab
</code></pre>
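<p>To verify the keytab afterwards, <code>klist -k</code> lists its entries; the principal shown here is only an example:</p>
<pre><code>klist -k /etc/krb5.keytab
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------
   3 host/server.example.com@EXAMPLE.COM
</code></pre>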
<h2 id="Logging" name="Logging">Logging</h2>
<p>To turn logging on, add this section to <code>/etc/krb5.conf</code> (adapt the
file paths to your likings):</p>
<pre><code> [logging]
default = FILE:/var/log/krb5.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
</code></pre>
<h2 id="Merging+%28or+editing%29+a+keytab+file" name="Merging+%28or+editing%29+a+keytab+file">Merging (or editing) a keytab file</h2>
<p>Merging or editing keytabs is done through the <strong>ktutil</strong> command.
Suppose we have two keytabs, keytab1 and keytab2, each having their own
set of keys, and we would like to merge the two keytabs in one (or
create a new keytab containing specific keys). The operation is done
through the <strong>ktutil</strong> shell, with <strong>rkt</strong> and <strong>write_kt</strong> commands,
and optionally <strong>delent</strong> if you want to delete some entities. Example:</p>
<pre><code> # ktutil
</code></pre>
<p>Read content of keytab1:</p>
<pre><code> ktutil: rkt keytab1
ktutil: list
slot KVNO Principal
---- ---- -------------------------------------------------------------
1 3 &lt;principal and key of keytab1&gt;
2 3 &lt;principal and key of keytab1&gt;
</code></pre>
<p>Now, we will read the content of keytab2:</p>
<pre><code> ktutil: rkt keytab2
ktutil: list
slot KVNO Principal
---- ---- -----------------------------------------------------------
1 3 &lt;principal and key of keytab1&gt;
2 3 &lt;principal and key of keytab1&gt;
3 2 &lt;principal and key of keytab2&gt;
4 2 &lt;principal and key of keytab2&gt;
</code></pre>
<p>Save this content in a temporary keytab:</p>
<pre><code> ktutil: write_kt /tmp/krb5.keytab
</code></pre>
<p>This utility is used to duplicate and tweak keytab entries (as its
name implies), and removes the need to export the keys out of the KDC
twice or more (which would also increment the KVNO).</p>
<h2 id="OpenWRT+recipes" name="OpenWRT+recipes">OpenWRT recipes</h2>
<h3 id="Packages" name="Packages">Packages</h3>
<h4 id="Server" name="Server">Server</h4>
<ul>
<li><code>krb5-server</code>
<ul>
<li><code>krb5-libs</code> (dependency of <strong>krb5-server</strong>)</li>
</ul></li>
</ul>
<h4 id="Client" name="Client">Client</h4>
<ul>
<li><code>krb5-client</code></li>
</ul>
<h3 id="Configuration" name="Configuration">Configuration</h3>
<p>Create the file <code>/etc/krb5.conf</code> with the following contents. Example:</p>
<pre><code>[libdefaults]
default_realm = YOURDOMAIN.ORG
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
forwardable = yes
[realms]
YOURDOMAIN.ORG = {
kdc = server_address_of_this_machine:88
admin_server = server_address_of_this_machine:749
default_domain = yourdomain.org
}
[domain_realm]
.yourdomain.org = YOURDOMAIN.ORG
yourdomain.org = YOURDOMAIN.ORG
</code></pre>
<p>Replace <code>YOURDOMAIN.ORG</code> / <code>yourdomain.org</code> with the domain name
the server should act for (names must be specified in
UPPER-/lowercase as shown above). Replace <code>server_address_of_this_machine</code>
with the host name/IP address of the server you're setting up.</p>
<h4 id="Starting+the+server" name="Starting+the+server">Starting the server</h4>
<p>Start the server by issuing</p>
<pre><code>/etc/init.d/krb5kdc start
</code></pre>
<p>This should create the <code>/etc/krb5kdc/</code> directory with the following files</p>
<pre><code>-rw------- 1 root root 8192 Feb 13 11:17 principal
-rw------- 1 root root 8192 Feb 13 09:12 principal.kadm5
-rw------- 1 root root 0 Feb 13 09:12 principal.kadm5.lock
-rw------- 1 root root 0 Feb 13 11:17 principal.ok
</code></pre>
<p>If you don't get any error messages, check your server by logging
on with <code>kadmin.local</code>. If everything works well, you will see the
following message:</p>
<pre><code>root@bridge:~# kadmin.local
Authenticating as principal xxxxxxx/admin@YOURDOMAIN.ORG with password.
kadmin.local:
</code></pre>
<h4 id="Start+on+boot" name="Start+on+boot">Start on boot</h4>
<p>To enable/disable automatic start on boot:</p>
<pre><code>/etc/init.d/krb5kdc enable
</code></pre>
<p>This simply creates a symlink: <code>/etc/rc.d/S60krb5kdc</code> → <code>/etc/init.d/krb5kdc</code></p>
<pre><code>/etc/init.d/krb5kdc disable
</code></pre>
<p>This removes the symlink again.</p>
<h2 id="References" name="References">References</h2>
<p>See Also:</p>
<ul>
<li><a href="http://www.kerberos.org/software/adminkerberos.pdf">http://www.kerberos.org/software/adminkerberos.pdf</a><br />
Refer to pages 16/17 for testing procedures.</li>
</ul>
Emacs Cheat Sheet
urn:uuid:8cfd6fc8-762c-de92-5da9-4610e243094b
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Quick reference article for how to use Emacs. Yes, it is really
old skool!</p>
<h2 id="Cursor+Motion" name="Cursor+Motion">Cursor Motion</h2>
<table>
<thead>
<tr>
<th>Key</th>
<th>Cursor Motion</th>
</tr>
</thead>
<tbody>
<tr>
<td>C-f</td>
<td>Forward one character</td>
</tr>
<tr>
<td>C-b</td>
<td>Backward one character</td>
</tr>
<tr>
<td>C-n</td>
<td>Next line</td>
</tr>
<tr>
<td>C-p</td>
<td>Previous line</td>
</tr>
<tr>
<td>C-a</td>
<td>Beginning of line</td>
</tr>
<tr>
<td>C-e</td>
<td>End of line</td>
</tr>
<tr>
<td>C-v</td>
<td>Next screenful</td>
</tr>
<tr>
<td>M-v</td>
<td>Previous screenful</td>
</tr>
<tr>
<td>M-&lt;</td>
<td>Beginning of buffer</td>
</tr>
<tr>
<td>M-&gt;</td>
<td>End of buffer</td>
</tr>
<tr>
<td>C-s</td>
<td>Search forward incrementally</td>
</tr>
<tr>
<td>C-r</td>
<td>Reverse search incrementally</td>
</tr>
<tr>
<td>C-u C-s</td>
<td>Reg-Exp Search forward incrementally</td>
</tr>
<tr>
<td>C-u C-r</td>
<td>Reg-Exp Reverse search incrementally</td>
</tr>
<tr>
<td>C-x C-x</td>
<td>Swap mark and cursor</td>
</tr>
<tr>
<td>C-Space</td>
<td>Set mark</td>
</tr>
<tr>
<td>C-l</td>
<td>Recenter the screen on the cursor line</td>
</tr>
<tr>
<td>M-}</td>
<td>Forward one paragraph</td>
</tr>
<tr>
<td>M-{</td>
<td>Backward one paragraph</td>
</tr>
</tbody>
</table>
<h2 id="Text+editing" name="Text+editing">Text editing</h2>
<table>
<thead>
<tr>
<th>Key</th>
<th>Editing</th>
</tr>
</thead>
<tbody>
<tr>
<td>C-q</td>
<td>Quoted insert (next character is inserted literally)</td>
</tr>
<tr>
<td>C-d</td>
<td>Delete next character</td>
</tr>
<tr>
<td>Backspc</td>
<td>Delete previous character</td>
</tr>
<tr>
<td>M-%</td>
<td>Query string replacement</td>
</tr>
<tr>
<td>M-d</td>
<td>Delete next word</td>
</tr>
<tr>
<td>M-Bcksp</td>
<td>Delete previous word</td>
</tr>
<tr>
<td>C-k</td>
<td>Kill to end of line (delete to end of line)</td>
</tr>
<tr>
<td>C-w</td>
<td>Cut region</td>
</tr>
<tr>
<td>M-w</td>
<td>Copy region</td>
</tr>
<tr>
<td>C-y</td>
<td>Yank most recent cut/copy (paste command)</td>
</tr>
<tr>
<td>M-y</td>
<td>Replace yanked text with previously cut/copied text (only works immediately after C-y or another M-y)</td>
</tr>
<tr>
<td>C-x u</td>
<td>undo</td>
</tr>
<tr>
<td>C-u ##</td>
<td>Repeat the next command ## times</td>
</tr>
</tbody>
</table>
<h2 id="File+Commands" name="File+Commands">File Commands</h2>
<table>
<thead>
<tr>
<th>Key</th>
<th>Files</th>
</tr>
</thead>
<tbody>
<tr>
<td>C-x C-f</td>
<td>Open a file</td>
</tr>
<tr>
<td>C-x C-s</td>
<td>Save buffer to file</td>
</tr>
<tr>
<td>C-x C-w</td>
<td>Write buffer to file (Save As)</td>
</tr>
<tr>
<td>C-x C-c</td>
<td>Exit Emacs</td>
</tr>
<tr>
<td>C-x s</td>
<td>Save all buffers</td>
</tr>
<tr>
<td>C-x i</td>
<td>Insert file</td>
</tr>
<tr>
<td>C-g</td>
<td>Cancel current command</td>
</tr>
<tr>
<td>C-z</td>
<td>Suspend/Minimize Emacs</td>
</tr>
</tbody>
</table>
<h2 id="Buffers" name="Buffers">Buffers</h2>
<table>
<thead>
<tr>
<th>Key</th>
<th>Buffers</th>
</tr>
</thead>
<tbody>
<tr>
<td>C-x b</td>
<td>Switch to buffer</td>
</tr>
<tr>
<td>C-x 1</td>
<td>Close all other windows</td>
</tr>
<tr>
<td>C-x 2</td>
<td>Split window in two, one above the other</td>
</tr>
<tr>
<td>C-x 3</td>
<td>Split window in two, side by side</td>
</tr>
<tr>
<td>C-x 0</td>
<td>Close current window</td>
</tr>
<tr>
<td>C-x o</td>
<td>Switch to other window</td>
</tr>
<tr>
<td>C-x C-b</td>
<td>List buffers</td>
</tr>
<tr>
<td>C-x k</td>
<td>Kill buffer</td>
</tr>
<tr>
<td>C-x ^</td>
<td>Grow window vertically; prefix is number of lines</td>
</tr>
</tbody>
</table>
<h2 id="Help" name="Help">Help</h2>
<table>
<thead>
<tr>
<th>Key</th>
<th>Help</th>
</tr>
</thead>
<tbody>
<tr>
<td>C-h C-h</td>
<td>Help menu</td>
</tr>
<tr>
<td>C-h i</td>
<td>Info</td>
</tr>
<tr>
<td>C-h a</td>
<td>Apropos</td>
</tr>
<tr>
<td>C-h b</td>
<td>Key bindings</td>
</tr>
<tr>
<td>C-h m</td>
<td>Mode help</td>
</tr>
<tr>
<td>C-h k</td>
<td>Show command documentation; prompts for keystrokes</td>
</tr>
<tr>
<td>C-h c</td>
<td>Show command name on message line; prompts for keystrokes</td>
</tr>
<tr>
<td>C-h f</td>
<td>Describe function; prompts for command or function name, shows documentation in other window</td>
</tr>
<tr>
<td>C-h i</td>
<td>Info browser; gives access to online documentation for emacs and more</td>
</tr>
</tbody>
</table>
<h2 id="Misc" name="Misc">Misc</h2>
<table>
<thead>
<tr>
<th>Key</th>
<th>Other</th>
</tr>
</thead>
<tbody>
<tr>
<td>M-/</td>
<td>Abbreviation</td>
</tr>
<tr>
<td>M-q</td>
<td>Autoformat current text region</td>
</tr>
<tr>
<td>C-M-\</td>
<td>Re-indent current region</td>
</tr>
<tr>
<td>C-x (</td>
<td>Start defining macro</td>
</tr>
<tr>
<td>C-x )</td>
<td>Stop macro definition</td>
</tr>
<tr>
<td>C-x e</td>
<td>Execute macro</td>
</tr>
</tbody>
</table>
<h2 id="C-Mode+Commands" name="C-Mode+Commands">C-Mode Commands</h2>
<table>
<thead>
<tr>
<th>Key</th>
<th>C-Mode</th>
</tr>
</thead>
<tbody>
<tr>
<td>C-j</td>
<td>Insert a newline and indent the next line.</td>
</tr>
<tr>
<td>C-c C-q</td>
<td>Fix indentation of current function</td>
</tr>
<tr>
<td>C-c C-a</td>
<td>Toggle the auto-newline-insertion mode. (If it was off, it will now be on and vice versa.)</td>
</tr>
<tr>
<td>C-c C-d</td>
<td>Toggle the hungry delete mode</td>
</tr>
</tbody>
</table>
<h2 id="Extended+commands" name="Extended+commands">Extended commands</h2>
<p>Enter <code>M-x</code> (i.e. <code>ESC</code> followed by <code>x</code>) and enter one of these commands:</p>
<table>
<thead>
<tr>
<th>M-x</th>
<th>Commands</th>
</tr>
</thead>
<tbody>
<tr>
<td>c-set-style</td>
<td>Change the indentation style</td>
</tr>
<tr>
<td>replace-string</td>
<td>Global string replacement</td>
</tr>
<tr>
<td>revert-buffer</td>
<td>Throw out all changes and revert to the last saved version of the file.</td>
</tr>
<tr>
<td>gdb</td>
<td>Start GNU debugger</td>
</tr>
<tr>
<td>shell</td>
<td>Start shell in new buffer</td>
</tr>
<tr>
<td>print-buffer</td>
<td>Send the contents of the current buffer to the printer</td>
</tr>
<tr>
<td>compile</td>
<td>Compile a program</td>
</tr>
<tr>
<td>set-variable</td>
<td>Change the value of an Emacs variable to customize Emacs</td>
</tr>
<tr>
<td>artist-mode</td>
<td>Start artist mode</td>
</tr>
<tr>
<td>artist-mode-off</td>
<td>Exit artist mode</td>
</tr>
<tr>
<td>tabify</td>
<td>...</td>
</tr>
<tr>
<td>untabify</td>
<td>...</td>
</tr>
</tbody>
</table>
<h2 id="Tags" name="Tags">Tags</h2>
<p>Use tags to navigate source code. It's not hard to set up. This takes advantage of a popular tool called "Exuberant Ctags" (AKA ctags, or etags) that scans your source code and indexes the symbols into a <code>TAGS</code> file. Note: emacs comes with a tool called "etags" that does almost the same thing as Exuberant Ctags. In cygwin, the "etags" binary is actually Exuberant Ctags. Confused yet? My advice is, ignore Emacs etags, and use Exuberant Ctags, whatever it happens to be called in your part of the universe. To generate a <code>TAGS</code> file, do this in the root of your code tree (stick this in a script or Makefile):</p>
<pre><code>#ETAGS=/cygdrive/c/emacs-21.3/bin/etags.exe
ETAGS=etags # Exuberant ctags
rm -f TAGS
find . -name '*.cpp' -o -name '*.h' -o -name '*.c' -print0 \
| xargs -0 "$ETAGS" --extra=+q --fields=+fksaiS --c++-kinds=+px --append
</code></pre>
<p>Then, when you're reading code and want to see the definition(s) of a symbol:</p>
<ul>
<li><code>M-.</code>: goes to the symbol definition</li>
<li><code>M-0 M-.</code>: goes to the next matching definition</li>
<li><code>M-*</code>: return to your starting point</li>
</ul>
ArchLinux tips
urn:uuid:6c83148e-383c-b9f3-1b4f-bde904d37690
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>A bunch of recipes useful for an ArchLinux system environment.</p>
<p>Mostly around system administration.</p>
<h2 id="Custom+Repos+and+Packages" name="Custom+Repos+and+Packages">Custom Repos and Packages</h2>
<p>In the repo directory, put all the packages in there.</p>
<pre><code>repo-add ./custom.db.tar.gz ./*
</code></pre>
<p>Add to <code>pacman.conf</code>:</p>
<pre><code>[custom]
SigLevel = [Package|Database]Never|Optional|Required
Server = path-to-repo
</code></pre>
<p>See also <code>repo-remove</code>. A package database is a tar file, optionally compressed. Valid extensions are <code>.db</code> or <code>.files</code> followed by an archive extension of <code>.tar</code>, <code>.tar.gz</code>, <code>.tar.bz2</code>, <code>.tar.xz</code>, or <code>.tar.Z</code>. The file does not need to exist, but all parent directories must exist. ?Can we create a <code>rpmgot.php</code> hack?</p>
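<p>As a concrete sketch (the path and the relaxed <code>SigLevel</code> are only examples; pick a signature policy that fits your setup):</p>
<pre><code>[custom]
SigLevel = Optional TrustAll
Server = file:///srv/repo/custom
</code></pre>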
<h2 id="Safe+automatic+pacman+upgrades" name="Safe+automatic+pacman+upgrades">Safe automatic pacman upgrades</h2>
<ul>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=66822">safepac</a> : This is an approach for automating pacman upgrades yet catching <em>problematic</em> updates before hand.</li>
</ul>
<h2 id="Building+packages" name="Building+packages">Building packages</h2>
<p>requires: @base-devel, abs, fakeroot</p>
<pre><code>makepkg -s
</code></pre>
<p>or</p>
<pre><code>makeworld ?
</code></pre>
<h2 id="Working+with+the+serial+console" name="Working+with+the+serial+console">Working with the serial console</h2>
<p>Configure your Arch Linux machine so you can connect to it via the serial console port (com port). This will enable you to administer the machine even if it has no keyboard, mouse, monitor, or network attached to it (a headless server).</p>
<h3 id="Configuration" name="Configuration">Configuration</h3>
<p>Add this to the bootloader kernel line:</p>
<pre><code>console=tty0 console=ttyS0,9600
</code></pre>
<p>From systemd:</p>
<pre><code>systemctl enable getty@ttyS0.service
</code></pre>
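<p>If GRUB is the bootloader (an assumption; other bootloaders differ), the console arguments can be made persistent in <code>/etc/default/grub</code> and the config regenerated:</p>
<pre><code>GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,9600"
# then:
grub-mkconfig -o /boot/grub/grub.cfg
</code></pre>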
<h2 id="Installing+Arch+Linux+using+the+serial+console" name="Installing+Arch+Linux+using+the+serial+console">Installing Arch Linux using the serial console</h2>
<ol>
<li>Boot the target machine using the Arch Linux installation CD.</li>
<li>When the bootloader appears, select "Boot Arch Linux ()" and press tab to edit</li>
<li>Append console=ttyS0 and press enter</li>
<li>Systemd should now detect ttyS0 and spawn a serial getty on it, allowing you to proceed as usual</li>
</ol>
<p>Note: After setup is complete, the console settings will not be saved on the target machine; in order to avoid having to connect a keyboard and monitor, configure console access on the target machine before rebooting.</p>
<hr />
<h2 id="Identifying+files+not+owned+by+any+package" name="Identifying+files+not+owned+by+any+package">Identifying files not owned by any package</h2>
<pre><code>#!/bin/sh
# pacman-disowned - list files not owned by any package
tmp=${TMPDIR-/tmp}/pacman-disowned-$UID-$$
db=$tmp/db
fs=$tmp/fs
mkdir "$tmp"
trap 'rm -rf "$tmp"' EXIT
pacman -Qlq | sort -u > "$db"
find /bin /etc /sbin /usr \
  ! -name lost+found \
  \( -type d -printf '%p/\n' -o -print \) | sort > "$fs"
comm -23 "$fs" "$db"
</code></pre>
<h2 id="Pacman+one+liners" name="Pacman+one+liners">Pacman one liners</h2>
<ul>
<li>Remove a package and its dependencies: <code>pacman -Rs package_name</code></li>
<li>List explicitly installed packages: <code>pacman -Qeq</code></li>
<li>List orphans: <code>pacman -Qtdq</code></li>
<li>Remove everything but the base group: <code>pacman -Rs $(comm -23 &lt;(pacman -Qeq|sort) &lt;((for i in $(pacman -Qqg base); do pactree -ul $i; done)|sort -u|cut -d ' ' -f 1))</code></li>
<li>List changed configuration files: <code>pacman -Qii | awk '/^MODIFIED/ {print $2}'</code></li>
<li>Download a package without installing it: <code>pacman -Sw package_name</code></li>
<li>Manage the pacman cache: <code>paccache -h</code></li>
</ul>
PHP notes
urn:uuid:e362d2f6-48e0-fb7f-3dee-12ddeb25d079
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Notes on doing different things within the PHP language.</p>
<h2 id="Object+oriented+introspection" name="Object+oriented+introspection">Object oriented introspection</h2>
<ul>
<li>property_exists(obj,prop_name)</li>
<li>method_exists(obj,method_name)</li>
<li>is_a(obj,'class_name') or ($obj instanceof ClassName)</li>
</ul>
<h2 id="Dynamic+coding" name="Dynamic+coding">Dynamic coding</h2>
<ul>
<li>Call a method: call_user_func(array($obj,'method'), ...args...)</li>
<li>You can simply $obj->prop = value to add properties.</li>
<li>or you can use __set and __get. See <a href="http://php.net/manual/en/language.oop5.overloading.php">http://php.net/manual/en/language.oop5.overloading.php</a></li>
</ul>
<h2 id="varargs" name="varargs">varargs</h2>
<ul>
<li><a href="http://php.net/manual/en/function.func-get-arg.php">func_get_arg(num)</a></li>
<li><a href="http://www.php.net/manual/en/function.func-get-args.php">func_get_args()</a></li>
<li><a href="http://www.php.net/manual/en/function.func-num-args.php">func_num_args()</a></li>
</ul>
Git recipes
urn:uuid:95988ead-451a-30e1-2f49-36152603ac9e
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>A collection of small useful recipes for using with <code>Git</code>.</p>
<h1>Rewriting history</h1>
<h2>Rolling back the last commit</h2>
<p>if nobody has pulled your remote repo yet, you can change your branch HEAD and force push it to said remote repo:</p>
<pre><code>git reset --hard HEAD^
git push -f</code></pre>
<h1>Restoring changes</h1>
<p>In the event that you want to go back to a previous version of a file, first identify the commit using:</p>
<pre><code>git log $file</code></pre>
<p>Once you know which commit to go to, do:</p>
<pre><code>git checkout $hash $file</code></pre>
<p>Then</p>
<pre><code>git commit $file</code></pre>
<h1>User friendly version ids</h1>
<p>To create version ids, use:</p>
<pre><code> git describe</code></pre>
<p>Gives:</p>
<pre><code> $tag-$commit_count-$hash</code></pre>
<p>However for this to work, you need to have a good tag set and a good tag naming convention.</p>
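<p>The describe string can be split back into its parts with plain shell parameter expansion; the value below is a made-up example:</p>
<pre><code># Split "$tag-$commit_count-$hash" as produced by git describe
desc="v1.2-14-gdeadbee"
hash=${desc##*-}     # strip through the last "-": gdeadbee
rest=${desc%-*}      # drop the hash: v1.2-14
count=${rest##*-}    # commits since the tag: 14
tag=${rest%-*}       # the tag itself: v1.2
echo "$tag $count $hash"
</code></pre>
<p>Because only the last two "-"-separated fields are peeled off, tag names that themselves contain dashes survive intact.</p>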
<h1>Branches</h1>
<p>Main branch names:</p>
<ul>
<li>master - The main branch. Source code of HEAD always reflects production-ready status.</li>
<li>develop or dev - Main dev branch. HEAD always reflects state with the latest development changes for the next release. This can sometimes be called the "integration branch" and used to generate automatic nightly builds.</li>
</ul>
<p>There are also a variety of supporting branches, to aid parallel development between team members, ease tracking of features, prepare for production releases, and assist in quickly fixing live production problems. Unlike the main branches, these branches have a limited lifetime, since they will be removed eventually. Creating a new branch:</p>
<pre><code> git checkout -b new_branch develop
# Creates a branch called "new_branch" from "develop" and switches to it
git push -u origin new_branch
# Pushes "new_branch" to the remote repo</code></pre>
<p>Listing branches</p>
<pre><code> git branch # List all local branches
git branch -a # List local and remote branches</code></pre>
<p>Merging branches</p>
<pre><code> git checkout dev
 # Switch to the branch that will receive the commits...
 git merge --no-ff "feature_branch"
 # Creates a single merge commit instead of fast-forwarding the feature branch's commits</code></pre>
<p>Deleting branches</p>
<pre><code>git branch -d branch_name # Only local branches
git push origin --delete branch_name # Remote branch
git push origin :branch_name # Old format for deleting... prefix with ":"</code></pre>
<p>Clean up, in the local repo, references to branches already deleted from the remote repo:</p>
<pre><code>git branch --delete branch
git remote prune origin</code></pre>
<h1>Tagging</h1>
<h2>Creating tags</h2>
<p>Tag releases with</p>
<pre><code>git tag -a $tagname -m "$descr"</code></pre>
<p>This creates an annotated tag that carries full metadata and is favored by <code>git describe</code>.</p>
<h2>Temporary snapshots</h2>
<pre><code>git tag $tagname</code></pre>
<p>These are lightweight tags, associated with a specific commit.</p>
<h2>Sharing tags</h2>
<p>By default, tags are not pushed. They need to be exported with:</p>
<pre><code>git push origin $tagname</code></pre>
<p>or</p>
<pre><code>git push origin --tags</code></pre>
<h2>To pull tags (if there aren't any)</h2>
<pre><code>git fetch --tags</code></pre>
<h2>Deleting tags</h2>
<pre><code>git tag -d $tagname # Local tags
git push --delete origin $tagname # Remote tags
git push origin :refs/tags/$tagname # Remote tags (OLD VERSION)</code></pre>
<h2>Rename a tag:</h2>
<pre><code>git tag new old
git tag -d old
git push origin new
git push origin :refs/tags/old</code></pre>
<h1>Setting up GIT</h1>
<pre><code>git config --global user.name "user"
git config --global user.email "email"</code></pre>
<p>Other settings:</p>
<pre><code>[http]
sslVerify = false
proxy = http://10.47.142.30:8080/
[user]
email = alejandro_liu@hotmail.com
name = alex</code></pre>
<h2>Using ~/.netrc for persistent authentication</h2>
<p>Create a file called <code>.netrc</code> in your home directory. Make sure you set its permissions to <code>600</code> so that it is only readable by its owner. On Windows, create a file <code>_netrc</code> in your home directory. You may need to define a <code>%HOME%</code> environment variable. In Windows 7 you can use:</p>
<pre><code>setx HOME %USERPROFILE%</code></pre>
<p>or</p>
<pre><code>set HOME=%HOMEDRIVE%%HOMEPATH%</code></pre>
<p>The contents of <code>.netrc</code> (or <code>_netrc</code>) are as follows:</p>
<pre><code>machine $system
  login $user
  password $pwd
machine $system
  login $user
  password $pwd</code></pre>
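<p>A minimal shell sketch for creating the file with safe permissions (the machine, login and password values are placeholders):</p>

```shell
umask 077                 # files created from here on are not group/world readable
cat > ~/.netrc <<'EOF'
machine example.com
  login myuser
  password secret
EOF
chmod 600 ~/.netrc        # make sure the mode is exactly 600
```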
<h1>Creating new repositories</h1>
<pre><code>mkdir ~/hello-world
cd ~/hello-world
git init
# Creates an empty repository in ~/hello-world
touch file
git add file
git commit -m 'first commit'
# Creates a new file and commits locally
git remote add origin 'https://$user:$passwd@github.com/$user/hello-world.git'
# Creates a remote name for push/pull
git push origin master
# Send commits to remote</code></pre>
<p>Creating a bare repo:</p>
<pre><code>mkdir templ
cd templ
echo "Initial commit" &gt; README.md
git add README.md
git commit -m"Initial commit"
git clone --bare . ../templ.git</code></pre>
<h1>Vendor Branches</h1>
<p>Set-up</p>
<pre><code>unzip wordpress-2.3.zip
cd wordpress
# Note, unzip creates this directory...
git init
git add .
git commit -m 'Import wordpress 2.3'
git tag v2.3
git branch upstream
# Create the upstream branch used to track new vendor releases</code></pre>
<p>When a new release comes out:</p>
<pre><code>cd wordpress
git checkout upstream
rm -r *
# Deletes all files in the main directory but doesn't touch dot files (like .git)
(cd .. &amp;&amp; unzip wordpress-2.3.1.zip)
git add .
git commit -a -m 'Import wordpress 2.3.1'
git tag v2.3.1
git checkout master
git merge upstream</code></pre>
<p>A variation of vendor branches is to sync with an upstream fork in github. Read this guide on how to do that: <a href="https://help.github.com/articles/syncing-a-fork/">Syncing a fork on github</a></p>
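<p>The same idea can be sketched with plain git commands; the remote name <code>upstream</code> is a convention and the URL is a placeholder:</p>

```shell
# Register the original repository as a second remote
git remote add upstream https://github.com/UPSTREAM_OWNER/REPO.git
git fetch upstream            # get the upstream commits
git checkout master
git merge upstream/master     # bring your fork's master up to date
```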
<h1>GIT through patches</h1>
<p>Creating a patch:</p>
<pre><code> ... prepare a new branch to keep work separate ...
git checkout -b mybranch
... do work ...
git commit -a
.. create the patch from branch "master"...
git format-patch master --stdout &gt; file.patch</code></pre>
<p>To apply patch..</p>
<pre><code> ... show what the patch file will do ...
git apply --stat file.patch
.. displays issues the patch might cause...
git apply --check file.patch
.. apply with am (so you can sign-off)
git am --signoff &lt; file.patch</code></pre>
<h1>Maintenance</h1>
<pre><code>git fsck
git gc --prune=now # Clean-up
git remote prune origin # Clean-up stale references to deleted remote objects</code></pre>
<h1>Submodules</h1>
<p>Add submodules to a project:</p>
<pre><code>git submodule add $repo_url $dir</code></pre>
<p>Clone a project with submodules:</p>
<pre><code>git clone $repo_url
cd $repo
git submodule init
git submodule update</code></pre>
<p>Or in a single command (Git >1.6.5):</p>
<pre><code>git clone --recursive $repo_url</code></pre>
<p>For already cloned (Git >1.6.5):</p>
<pre><code>git clone $repo_url
cd $repo
git submodule update --init --recursive</code></pre>
<p>To keep a submodule up-to-date:</p>
<pre><code>git pull
git submodule update</code></pre>
<p>Remove sub-modules:</p>
<pre><code>git submodule deinit $submodule
git rm $submodule # No trailing slash!
rm -rf .git/modules/$submodule # Also discard the cached clone</code></pre>
<h2 id="setting+git+email+per+repository" name="setting+git+email+per+repository">Setting git email per repository</h2>
<p>Navigate to the work repository, then at the root folder run the
following command to change the email.</p>
<pre><code>git config --local user.email name@work.com</code></pre>
<p><strong>Note:</strong> this command only affects the current repository. Any
other repositories will still use the default email specified in
<code>~/.gitconfig</code>.</p>
<p>Alternatively, you can have different configurations based on a
directory path by using:</p>
<p>Contents of <code>$HOME/.gitconfig</code></p>
<pre><code>[includeIf "gitdir:~/work/"]
path = .gitconfig-work
[includeIf "gitdir:~/personal/"]
path = .gitconfig-personal</code></pre>
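<p>For the conditional includes to take effect, the referenced files must exist. As an example (the values are placeholders), <code>~/.gitconfig-work</code> could contain:</p>

```
[user]
	email = name@work.com
	name = Your Work Name
```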
Getting rid of DRM on e-books and videos
urn:uuid:63fb17bb-04ae-a619-9767-3787afe55323
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Instructions on how to remove DRM from E-Books and videos.</p>
<h2 id="How+to+Remove+DRM+from+Ebooks+%28and+Back+Up+Your+Library+Permanently%29" name="How+to+Remove+DRM+from+Ebooks+%28and+Back+Up+Your+Library+Permanently%29">How to Remove DRM from Ebooks (and Back Up Your Library Permanently)</h2>
<p>The easiest way to strip DRM from Kindle books (and Barnes and Noble, Adobe Digital Content, etc) is with the free ebook software Calibre, DRM removal plugins, and a copy of the Kindle desktop software (PC/Mac). These directions are for Kindle, but will work with Barnes and Noble, Adobe Digital Editions, and older formats. Here's what you need to do:</p>
<ol>
<li>Download Calibre, the plugins, and the Kindle Desktop software.</li>
<li>Unzip the contents of the plugin directory.</li>
<li>Open up Calibre and click on <code>"Preferences."</code></li>
<li>Navigate to <code>"Plugins"</code> under the <code>"Advanced"</code> section.</li>
<li>Click <code>"Load Plugin from file,"</code> and select <code>K3MobiDeDRM_v04.5_plugin.zip</code> from the directory you just unzipped.</li>
<li>Load up the Kindle app on your Mac or Windows computer and download all your books from Amazon.</li>
<li>Navigate to either <code>C:\Users\[your username]\Documents\My Kindle Content</code> on Windows or <code>[your username]/My Documents/My Kindle Content</code> on Mac.</li>
<li>Your books aren't named in any meaningful way, so just drag all the <code>*.azw</code> files into Calibre.</li>
<li>After a short wait (depending on the size of your library), Calibre will finish importing the books. Now you have a DRM-free backup of all your books on your computer.</li>
</ol>
<p>It's a little convoluted, but once you get the hang of it, Calibre is a solid way to back up all your purchased ebooks.</p>
<h2 id="How+to+Remove+DRM+from+Movies+and+TV+Shows" name="How+to+Remove+DRM+from+Movies+and+TV+Shows">How to Remove DRM from Movies and TV Shows</h2>
<p>You can record directly from your computer using a screen recording tool (any of these <a href="http://lifehacker.com/5839047/five-best-screencasting-or-screen-recording-tools">five</a> will do). You will, of course, have to wait for the entire movie since it operates essentially like dubbing, but if you already use screen recording tools it's a free option for backing up your movies.</p>
Wordpress links
urn:uuid:9fdd6576-1d2e-dac8-71d5-148fbba2d0db
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This article describes how you create hyperlinks within
Wordpress. There are a number of ways to do this, depending
on the configuration and the types of data we are linking to.</p>
<h2 id="Linking+Without+Using+Permalinks" name="Linking+Without+Using+Permalinks">Linking Without Using Permalinks</h2>
<p>This actually works whether or not Permalinks are active. Using the numeric values found in the ID column of the Posts, Categories, and Pages Administration, you can create links as follows.</p>
<h3 id="Posts" name="Posts">Posts</h3>
<p>To link to a Post, find the ID of the target post on the Posts administration panel, and insert it in place of the '123' in this link:</p>
<pre><code>&lt;a href="index.php?p=123"&gt;Post Title&lt;/a&gt;
</code></pre>
<h3 id="Categories" name="Categories">Categories</h3>
<p>To link to a Category, find the ID of the target Category on the Categories administration panel, and insert it in place of the '7' in this link:</p>
<pre><code>&lt;a href="index.php?cat=7"&gt;Category Title&lt;/a&gt;
</code></pre>
<h3 id="Pages" name="Pages">Pages</h3>
<p>To link to a Page, find the ID of the target Page on the Pages administration panel, and insert it in place of the '42' in this link:</p>
<pre><code>&lt;a href="index.php?page_id=42"&gt;Page title&lt;/a&gt;
</code></pre>
<h3 id="Date-based+Archives" name="Date-based+Archives">Date-based Archives</h3>
<pre><code>Year: &lt;a href="index.php?m=2006"&gt;2006&lt;/a&gt;
Month: &lt;a href="index.php?m=200601"&gt;Jan 2006&lt;/a&gt;
Day: &lt;a href="index.php?m=20060101"&gt;Jan 1, 2006&lt;/a&gt;
</code></pre>
<h2 id="Linking+Using+Permalinks" name="Linking+Using+Permalinks">Linking Using Permalinks</h2>
<p>If you have enabled permalinks, you have a few additional options for providing links that readers of your site will find a bit more user-friendly than the cryptic numbers. For posts, replace each Structure Tag in your permalink structure with the data appropriate to a post to construct a URL for that post. For example, if the permalink structure is:</p>
<pre><code>/index.php/archives/%year%/%monthnum%/%day%/%postname%/
</code></pre>
<p>Replacing the Structure Tags with appropriate values may produce a URL that looks like this:</p>
<pre><code>&lt;a href="/index.php/archives/2005/04/22/my-sample-post/"&gt;My Sample Post&lt;/a&gt;
</code></pre>
<p>To obtain an accurate URL for a post it may be easier to navigate to the post within the WordPress blog and then copy the URL from one of the blog links that WordPress generates. Review the information at Using Permalinks for more details on constructing URLs for individual posts.</p>
<h3 id="Categories" name="Categories">Categories</h3>
<p>To produce a link to a Category using permalinks, obtain the Category Base value from the Options > Permalinks Administration Panel, and append the category name to the end. For example, to link to the category "testing" when the Category Base is "/index.php/categories", use the following link:</p>
<pre><code>&lt;a href="/index.php/categories/testing/"&gt;category link&lt;/a&gt;
</code></pre>
<p>You can specify a link to a subcategory by using the subcategory directly (as above), or by specifying all parent categories before the category in the URL, like this:</p>
<pre><code>&lt;a href="/index.php/categories/parent_category/sub_category/"&gt;subcategory link&lt;/a&gt;
</code></pre>
<h3 id="Pages" name="Pages">Pages</h3>
<p>Pages have a hierarchy like Categories, and can have parents. If a Page is at the root level of the hierarchy, you can specify just the Page's "page slug" after the static part of your permalink structure:</p>
<pre><code>&lt;a href="/index.php/a-test-page"&gt;a test page&lt;/a&gt;
</code></pre>
<p>Once again, the best way to verify that this is the correct URL is to navigate to the target Page on the blog and compare the URL to the one you want to use in the link.</p>
<h3 id="Date-based+Archives" name="Date-based+Archives">Date-based Archives</h3>
<pre><code>Year: &lt;a href="/index.php/archives/2006"&gt;2006&lt;/a&gt;
Month: &lt;a href="/index.php/archives/2006/01/"&gt;Jan 2006&lt;/a&gt;
Day: &lt;a href="/index.php/archives/2006/01/01/"&gt;Jan 1, 2006&lt;/a&gt;
</code></pre>
International Phonetic Alphabet
urn:uuid:77dc18ab-0317-73e7-6205-6609a3af7da4
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Phonetic Alphabet</p>
<table>
<thead>
<tr>
<th>A-J</th>
<th>K-T</th>
<th>U-Z / Symbols</th>
<th>Digits</th>
</tr>
</thead>
<tbody>
<tr>
<td>Alpha</td>
<td>Kilo</td>
<td>Uniform</td>
<td>0 - Zero</td>
</tr>
<tr>
<td>Bravo</td>
<td>Lima</td>
<td>Victor</td>
<td>1 - Wun</td>
</tr>
<tr>
<td>Charlie</td>
<td>Mike</td>
<td>Whiskey</td>
<td>2 - Two</td>
</tr>
<tr>
<td>Delta</td>
<td>November</td>
<td>X-Ray</td>
<td>3 - Tree</td>
</tr>
<tr>
<td>Echo</td>
<td>Oscar</td>
<td>Yankee</td>
<td>4 - Fower</td>
</tr>
<tr>
<td>Foxtrot</td>
<td>Papa</td>
<td>Zulu</td>
<td>5 - Fife</td>
</tr>
<tr>
<td>Golf</td>
<td>Quebec</td>
<td>. (decimal point)</td>
<td>6 - Six</td>
</tr>
<tr>
<td>Hotel</td>
<td>Romeo</td>
<td></td>
<td>7 - Seven</td>
</tr>
<tr>
<td>India</td>
<td>Sierra</td>
<td>. (full stop)</td>
<td>8 - Ait</td>
</tr>
<tr>
<td>Juliet</td>
<td>Tango</td>
<td></td>
<td>9 - Niner</td>
</tr>
</tbody>
</table>
Makefiles
urn:uuid:ab813dd4-bcac-da32-253e-0787855c1037
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Some notes on GNU Make. I always have to look-up these in
the manual. Here now for my own convenience.</p>
<h2 id="GNU+Make+automatic+variables%3A" name="GNU+Make+automatic+variables%3A">GNU Make automatic variables:</h2>
<p>From <a href="http://www.gnu.org/software/make/manual/html_node/Automatic-Variables.html">http://www.gnu.org/software/make/manual/html_node/Automatic-Variables.html</a>.</p>
<ul>
<li>$@<br />
The file name of the target of the rule.</li>
<li>$%<br />
The target member name, when the target is an archive member.</li>
<li>$&lt;<br />
The name of the first prerequisite.</li>
<li>$?<br />
The names of all the prerequisites that are newer.</li>
<li>$^<br />
The names of all the prerequisites.</li>
</ul>
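<p>A small (hypothetical) rule showing several of these variables together; <code>out.txt</code>, <code>a.txt</code> and <code>b.txt</code> are example names:</p>

```makefile
# $@ = out.txt (the target), $< = a.txt (first prerequisite),
# $^ = a.txt b.txt (all prerequisites)
out.txt: a.txt b.txt
	cat $^ > $@
```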
<p>To include files in Makefile only if they exist:</p>
<pre><code>ifneq ($(wildcard _incfile_),)
include _incfile_
endif
</code></pre>
Issue Tracker
urn:uuid:d8490636-a173-2c29-dc73-38fb2ecc8e64
2024-03-05T00:00:00+01:00
Alejandro Liu
<ul>
<li>Use DVCS as backend (GIT)</li>
<li>Output html</li>
<li>markdown</li>
<li>Prefer perl/python</li>
<li>Mostly RO so to avoid merge conflicts.</li>
</ul>
<h3 id="DITZ+%2B+git+integration" name="DITZ+%2B+git+integration">DITZ + git integration</h3>
<p>Adding Markdown</p>
<ul>
<li><code>lib/html.rb</code> contains the functions that generate HTML</li>
<li>*.rhtml contain templates and call functions in <code>lib/html.rb</code> to generate (and format) output.</li>
</ul>
<p>Note: if working with <code>github</code>, refer to <a href="http://github.github.com/github-flavored-markdown/">http://github.github.com/github-flavored-markdown/</a>. Using a markdown library:</p>
<ul>
<li><a href="http://maruku.rubyforge.org/usage.html">http://maruku.rubyforge.org/usage.html</a><br />
Pure Ruby</li>
<li><a href="http://kramdown.rubyforge.org/">http://kramdown.rubyforge.org/</a><br />
Pure Ruby (fast?)</li>
<li><a href="https://github.com/rtomayko/rdiscount">https://github.com/rtomayko/rdiscount</a><br />
C library</li>
<li><a href="http://ruby.morphball.net/bluefeather/index_en.html">http://ruby.morphball.net/bluefeather/index_en.html</a><br />
Pure Ruby?</li>
</ul>
<p>Arch has: redcarpet, rdiscount, maruku, ruby-markdown, github-markdown</p>
<h2 id="GIT+INTEGRATION" name="GIT+INTEGRATION">GIT INTEGRATION</h2>
<h3 id="Simple+hooks" name="Simple+hooks">Simple hooks</h3>
<h4 id="%7E%2F.ditz%2Fhooks%2Fafter_add.rb%3A" name="%7E%2F.ditz%2Fhooks%2Fafter_add.rb%3A">~/.ditz/hooks/after_add.rb:</h4>
<pre><code>Ditz::HookManager.on :after_add do |project, config, issues|
issues.each do |issue|
`git add #{issue.pathname}`
end
end
</code></pre>
<h4 id="%7E%2F.ditz%2Fhooks%2Fafter_delete.rb%3A" name="%7E%2F.ditz%2Fhooks%2Fafter_delete.rb%3A">~/.ditz/hooks/after_delete.rb:</h4>
<pre><code>Ditz::HookManager.on :after_delete do |project, config, issues|
issues.each do |issue|
`git rm #{issue.pathname}`
end
end
</code></pre>
<h3 id="GIT+Extensions%3A" name="GIT+Extensions%3A">GIT Extensions:</h3>
<p><a href="https://github.com/ihrke/git-ditz">https://github.com/ihrke/git-ditz</a> -
Adds a "ditz" subcommand to git. See README on how it installs.</p>
<h3 id="DITZ+PLUGINS%3A" name="DITZ+PLUGINS%3A">DITZ PLUGINS:</h3>
<h4 id="git-sync" name="git-sync">git-sync</h4>
<p>This plugin is useful for when you want synchronized, non-distributed issue<br />
coordination with other developers, and you're using git. It allows you to<br />
synchronize issue updates with other developers by using the 'ditz sync'<br />
command, which does all the git work of sending and receiving issue change<br />
for you. However, you have to set things up in a very specific way for this<br />
to work:</p>
<ol>
<li>Your ditz state must be on a separate branch. I recommend calling it<br />
<code>bugs</code>. Create this branch, do a ditz init, and push it to the remote<br />
repo. (This means you won't be able to mingle issue change and code<br />
change in the same commits. If you care.)</li>
<li>Make a checkout of the bugs branch in a separate directory, but NOT in<br />
your code checkout. If you're developing in a directory called "project",<br />
I recommend making a ../project-bugs/ directory, cloning the repo there<br />
as well, and keeping that directory checked out to the 'bugs' branch.<br />
(There are various complicated things you can do to make that directory<br />
share git objects with your code directory, but I wouldn't bother unless<br />
you really care about disk space. Just make it an independent clone.)</li>
<li>Set that directory as your issue-dir in your .ditz-config file in your<br />
code checkout directory. (This file should be in .gitignore, btw.)</li>
<li>Run 'ditz reconfigure' and fill in the local branch name, remote<br />
branch name, and remote repo for the issue tracking branch.</li>
</ol>
<p>Once that's set up, 'ditz sync' will change to the bugs checkout dir, bundle<br />
up any changes you've made to issue status, push them to the remote repo,<br />
and pull any new changes in too. All ditz commands will read from your bugs<br />
directory, so you should be able to use ditz without caring about where<br />
things are anymore. This complicated setup is necessary to avoid accidentally mingling code<br />
change and issue change. With this setup, issue change is synchronized,<br />
but how you synchronize code is still up to you. Usage:</p>
<ol>
<li>read all the above text very carefully</li>
<li>add a line "- git-sync" to the .ditz-plugins file in the project<br />
root</li>
<li>run 'ditz reconfigure' and answer its questions</li>
<li>run <code>ditz sync</code> with abandon</li>
</ol>
<h4 id="git+ditz+plugin" name="git+ditz+plugin">git ditz plugin</h4>
<p>This plugin allows issues to be associated with git commits and git<br />
branches. Git commits can be easily tagged with a ditz issue with the 'ditz<br />
commit' command, and both 'ditz show' and the ditz HTML output will then<br />
contain a list of associated commits for each issue. Issues can also be
assigned a single git feature branch. In this case, all<br />
commits on that branch will be listed as commits for that issue. This<br />
particular feature is fairly rudimentary, however: it assumes the reference<br />
point is the 'master' branch, and once the feature branch is merged back<br />
into master, the list of commits disappears. Two configuration variables are
added, which, when specified, are used to<br />
construct HTML links for the git commit id and branch names in the generated<br />
HTML output. Commands added:</p>
<ul>
<li>ditz set-branch: set the git branch of an issue</li>
<li>ditz commit: run git-commit, and insert the issue id into the commit<br />
message.</li>
</ul>
<p>Usage:</p>
<ol>
<li>add a line "- git" to the .ditz-plugins file in the project root</li>
<li>run ditz reconfigure, and enter the URL prefixes, if any, from<br />
which to create commit and branch links.</li>
<li>use 'ditz commit' with abandon.</li>
</ol>
<h4 id="COLLABORATION+PLUGINS" name="COLLABORATION+PLUGINS">COLLABORATION PLUGINS</h4>
<h5 id="issue-claiming" name="issue-claiming">issue-claiming</h5>
<p>This plugin allows people to claim issues. This is useful for avoiding<br />
duplication of work: you can check to see if someone's claimed an<br />
issue before starting to work on it, and you can let people know what<br />
you're working on. Commands added:</p>
<ul>
<li>ditz claim: claim an issue for yourself or a dev specified in project.yaml</li>
<li>ditz unclaim: unclaim a claimed issue</li>
<li>ditz mine: show all issues claimed by you</li>
<li>ditz claimed: show all claimed issues, by developer</li>
<li>ditz unclaimed: show all unclaimed issues</li>
</ul>
<p>Usage:</p>
<ol>
<li>add a line "- issue-claiming" to the .ditz-plugins file in the project<br />
root</li>
<li>(optional:) add a 'devs' key to project.yaml, e.g:</li>
</ol>
<h5 id="issue+labeling" name="issue+labeling">issue labeling</h5>
<p>This plugin allows labeling issues. This can replace the issue component<br />
and/or issue types (bug, feature, task) by providing a more flexible way<br />
to organize your issues. Commands added:</p>
<ul>
<li>ditz new_label [label]: create a new label for the project</li>
<li>ditz label : label an issue with some labels</li>
<li>ditz unlabel [labels]: remove some label(s) of an issue</li>
<li>ditz labeled [release]: show all issues with these labels</li>
</ul>
<p>Usage:</p>
<ol>
<li>add a line "- issue-labeling" to the .ditz-plugins file in the project<br />
root</li>
<li>use the above commands with abandon</li>
</ol>
<p>TODO:</p>
<ul>
<li>extend the HTML view to have per-labels listings</li>
<li>allow for more compact way to type them (completion, prefixes...)</li>
</ul>
<h5 id="issue+priority" name="issue+priority">issue priority</h5>
<p>This plugin allows issues to have priorities. Priorities are numbers<br />
P1-P5 where P1 is the highest priority and P5 is the lowest. Internally<br />
the priorities are sorted lexicographically. Commands added:</p>
<ul>
<li>ditz set-priority : Set the priority of an issue</li>
</ul>
<p>Usage:</p>
<ol>
<li>add a line "- issue-priority" to the .ditz-plugins file in the project<br />
root</li>
<li>use the above commands with abandon</li>
</ol>
Cleaning up Google Calendar
urn:uuid:e9fc8815-7d4a-6fbe-e9ac-40baabc95536
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Recipe for cleaning a google calendar.</p>
<ol>
<li>Sign in to Google Calendar</li>
<li>Click on Calendar Settings (current version has this just above the list of personal calendars, under an arrow).</li>
<li>Click on "Delete" of the main calendar.</li>
<li>A confirmation dialog box appears telling you that "This deletes all events on primary Calendar".</li>
</ol>
Automatically adding systems to an AD domain
urn:uuid:eb2b1211-f195-a202-3479-387b1fcf4308
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>When using virtualisation it is very common to create <em>template</em> VMs
that can be cloned from. This makes deployment much easier than having
to install a new VM from scratch. Unfortunately, the cloned VMs lack
any Active Directory memberships and the VMs have to be <em>manually</em>
added to the AD domain. For automated deployment scenarios this is
less than desirable. This recipe intends to solve that issue in a
<em>Hypervisor</em>-independent manner. This recipe uses a Visual Basic
script that will automatically join a system to a domain during
Windows system preparation. In Lab Manager these steps can be
performed on a VM Template so that virtual machines cloned from it
will be joined to the domain when the system customization process
runs. A specific Active Directory Organizational Unit can be
specified. The Visual Basic script will contain credentials used for
joining the system to the domain. So, as a security measure the Visual
Basic script is setup to be deleted at the end of a successful
execution.</p>
<h3 id="Prerequisites" name="Prerequisites">Prerequisites</h3>
<ul>
<li>Active Directory User Account with permissions to add Computer Objects.</li>
<li>LDAP path syntax to Active Directory Organizational Unit to add the Computer to.</li>
</ul>
<h3 id="Steps+on+the+VM+Template" name="Steps+on+the+VM+Template">Steps on the VM Template</h3>
<h4 id="Create+Scripts+Folder" name="Create+Scripts+Folder">Create Scripts Folder</h4>
<p><code>C:\Windows\Setup\Scripts</code></p>
<h4 id="Create+Batch+File" name="Create+Batch+File">Create Batch File</h4>
<p><code>C:\Windows\Setup\SetupComplete.cmd</code></p>
<pre><code>Start /wait cscript %WINDIR%\Setup\Scripts\AddDomain.vbs
Del %WINDIR%\Setup\Scripts\AddDomain.vbs
</code></pre>
<h4 id="Create+VBS+File" name="Create+VBS+File">Create VBS File</h4>
<p><code>C:\Windows\Setup\Scripts\AddDomain.vbs</code></p>
<pre><code class="language-vbs">Const JOIN_DOMAIN = 1
Const ACCT_CREATE = 2
Const ACCT_DELETE = 4
Const WIN9X_UPGRADE = 16
Const DOMAIN_JOIN_IF_JOINED = 32
Const JOIN_UNSECURE = 64
Const MACHINE_PASSWORD_PASSED = 128
Const DEFERRED_SPN_SET = 256
Const INSTALL_INVOCATION = 262144
strDomain = "DomainName"
strOU = "LDAP\OU\PATH"
strUser = "Domain\Username"
strPassword = "Password"
Set objNetwork = CreateObject("WScript.Network")
strComputer = objNetwork.ComputerName
Set objComputer = _
GetObject("winmgmts:{impersonationLevel=Impersonate}!\\" & _
strComputer & "\root\cimv2:Win32_ComputerSystem.Name='" _
& strComputer & "'")
ReturnValue = objComputer.JoinDomainOrWorkGroup(strDomain, _
strPassword, _
strDomain & "\" & strUser, _
strOU, _
JOIN_DOMAIN + ACCT_CREATE)
</code></pre>
<p>Tip: Start Notepad as administrator so you can save files to the folder.
Set the correct values for <code>strDomain</code>, <code>strOU</code>, <code>strUser</code> and <code>strPassword</code>.
Example:</p>
<pre><code> strDomain = "best.adinternal.com"
strOU = "ou=Virtuals,ou=CRE R&D,ou=Beaverton,ou=Shared Management,dc=best,dc=adinternal,dc=com"
strUser& = "_adjoinuser"
strPassword = "$uperS3curePassw()rd!{13245}"
</code></pre>
<h3 id="Deploy+VM" name="Deploy+VM">Deploy VM</h3>
<p>Be sure <code>Perform customization</code> is checked and <code>Microsoft Sysprep</code> is
selected in the VM Template properties, then clone the VM Template.</p>
<p>Tip: Wait around 10 minutes before trying to log in to the VM. During
this time the VM goes through the sysprep process, which changes the
hostname to the name specified when cloning the VM and joins the
domain. The process should be complete when the login screen displays
<strong>[Ctrl]+[Alt]+[Delete]</strong> and prompts for a domain login.</p>
<h3 id="Additional+notes" name="Additional+notes">Additional notes</h3>
<p>To improve security we could for example not hardcode login credentials
in the VB script. Instead, we could retrieve them from a web server
(using SSL). This server could reset the Login password for the
addDomain account and send that. Once this is completed, the password
could be reset again. Also, the web server could check the IP address
and referencing DNS/DHCP to see if this machine is indeed being
authorised. Finally, we can place this in a different AD domain (with
the appropriate trust relationships) so that you can apply additional
security policies.</p>
<h3 id="References" name="References">References</h3>
<ul>
<li>This recipe was originally published at: <a href="http://www.bonusbits.com/main/HowTo:Setup_a_VM_to_Automatically_Join_to_a_Domain">http://www.bonusbits.com/main/HowTo:Setup_a_VM_to_Automatically_Join_to_a_Domain</a><br />
Unfortunately I am no longer able to reach this site.</li>
<li><a href="http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007491">http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007491</a></li>
<li><a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa392154%28v=vs.85%29.aspx">http://msdn.microsoft.com/en-us/library/windows/desktop/aa392154%28v=vs.85%29.aspx</a></li>
</ul>
<p>Other examples:</p>
<ul>
<li><a href="http://mail-archives.apache.org/mod_mbox/incubator-vcl-user/201112.mbox/%3CCAD7o_XxD2a9j0+V7faE1nTSt-OMT+W9=y4TCvKZ-3q92+czewQ@mail.gmail.com%3E">http://mail-archives.apache.org/mod_mbox/incubator-vcl-user/201112.mbox/%3CCAD7o_XxD2a9j0+V7faE1nTSt-OMT+W9=y4TCvKZ-3q92+czewQ@mail.gmail.com%3E</a></li>
<li><a href="http://www.virtualizationteam.com/virtualization-vmware/vcloud-director/vcloud-director-joining-vms-to-specific-active-directory-domain-ou.html">http://www.virtualizationteam.com/virtualization-vmware/vcloud-director/vcloud-director-joining-vms-to-specific-active-directory-domain-ou.html</a></li>
<li><a href="http://itnervecenter.com/content/deploying-window-server-2008-r2-vmware-template-and-joining-it-domain">http://itnervecenter.com/content/deploying-window-server-2008-r2-vmware-template-and-joining-it-domain</a></li>
<li><a href="http://blogs.citrix.com/2011/09/16/xenclient-auto-join-vms-to-the-activedirectory/">http://blogs.citrix.com/2011/09/16/xenclient-auto-join-vms-to-the-activedirectory/</a></li>
</ul>
SATA/IDE warm plug/unplug
urn:uuid:71b2ee04-a899-266e-0f08-d44946243471
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is for SATA and IDE interfaces that do not automatically detect added/removed devices.</p>
<h3 id="Scanning+for+newly+added+discs%3A" name="Scanning+for+newly+added+discs%3A">Scanning for newly added discs:</h3>
<pre><code>echo "- - -" > /sys/class/scsi_host/host0/scan
</code></pre>
<h3 id="safely+removing+a+disk" name="safely+removing+a+disk">safely removing a disk</h3>
<pre><code>echo 1 > /sys/block/sda/device/delete
</code></pre>
<h3 id="Other+notes..." name="Other+notes...">Other notes...</h3>
<p>In the <em>HP MicroServer</em>, we can identify the host to scan by:</p>
<pre><code>head -1 /sys/class/scsi_host/host*/proc_name
</code></pre>
<p>And look for <code>pata_atiixp</code>. It should also show <code>ahci</code> and <code>usb-storage</code>.</p>
Icons
urn:uuid:5cfe4dbc-0acc-1959-e2d0-f16b58afcc56
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Finding icons:</p>
<p><a href="http://www.iconfinder.com/">iconfinder</a>.</p>
<p>In 2023 I am using:</p>
<p><a href="https://icons8.com/">icons8</a></p>
<p>This allows to download icons in different formats, and also allows
you to do tweaks such as:</p>
<ul>
<li>changing colors</li>
<li>adding sub-icons</li>
<li>resizing</li>
<li>etc...</li>
</ul>
On-line Web Authoring Resources
urn:uuid:4601fe64-1921-cc15-4328-f7f7280bccd4
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>A collection of links for Web-Authoring. This focuses on using
Web (HTML and CSS) technologies directly and not through a
CMS like Wordpress.</p>
<ul>
<li><a href="http://www.webestools.com/">http://www.webestools.com/</a><br />
Free online tools, generators, services, scripts, tutorials.</li>
<li><a href="http://www.google.com/webmasters/tools">http://www.google.com/webmasters/tools</a><br />
Google Webmaster tools, gives you a peek of how your website<br />
looks from google search engines perspective.</li>
<li>Free templates</li>
<li><a href="http://www.freecsstemplates.org/">http://www.freecsstemplates.org/</a></li>
<li><a href="http://csscreme.com/">http://csscreme.com/</a></li>
<li><a href="http://www.templatemo.com/">http://www.templatemo.com/</a> (Unrestricted)</li>
<li><a href="http://www.free-css.com/">http://www.free-css.com/</a> (GPL or CC)</li>
<li><a href="http://www.free-css-templates.com/">http://www.free-css-templates.com/</a> (CC)</li>
</ul>
First steps...
urn:uuid:45f493a0-345d-4ded-457f-ecb4202cc417
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>So I finally took the time to re-launch the <code>0ink</code> web site. This time I used more off-the-shelf software, so this site is just another plain <a href="http://wordpress.org">wordpress powered</a> site. Actually I have to thank my son for <em>introducing</em> me to <strong>wordpress.</strong> What happened is that my son, who is only seven, wanted to have his own<br />
web site. (Due to peer pressure, kids these days...) He has an Android tablet that he uses quite often. I knew that <strong>wordpress</strong> can be used to make decent looking web sites and that there even was an Android app. I also knew that free <strong>wordpress</strong> hosting sites can easily be found... To make a long story short, I set him up with a <a href="http://wordpress.com/">http://wordpress.com/</a> account and he was live on the 'Net in a matter of minutes. His website can be found <a href="http://sebitoliu.wordpress.com/">here</a>. This first foray got me intrigued, so I tested it on another free hosting <a href="http://s12.pw/">site (here)</a> and found it quite powerful, so I decided to use it for <code>0ink.net</code>, which seriously needed to move to a new host. The old hosting service <a href="http://www.110mb.com/">110mb</a> had been taken over by a <strong>new management team</strong> and the new free hosting service was not as appealing as before. Add in a little bit of bit-rot and that site quickly became an ugly mess. So now we are back again, and hopefully the site will be more maintainable.</p>
We are back...
urn:uuid:9f05bff4-7311-e1b5-0a64-091df5560c73
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>After a long time in <strong>Limbo</strong> we are back with <code>0ink.net</code> as a live
site.</p>
Local Perl packages
urn:uuid:34f9f5be-68c0-071e-05c2-82f21a16f5ea
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Determine the local PERL5LIB configuration by finding the first writable directory in <code>PERL5LIB</code>:</p>
<pre><code>LIB=$(
  for d in $(tr : ' ' <<<"$PERL5LIB") ; do
    if [ -w "$d" ] ; then
      echo "$d"
      break
    fi
  done)
PREFIX=$(dirname "$LIB")
</code></pre>
<p>The install sequence is then:</p>
<pre><code>#PREFIX=$HOME/cpan
#LIB=$HOME/cpan/lib
tar zxvf perl-module.tar.gz   # the module tarball you downloaded
cd unpacked-src-dir           # the directory it unpacks into
perl Makefile.PL PREFIX=$PREFIX LIB=$LIB "$@"
make
make test
make install
</code></pre>
Linux Keyboard Tips
urn:uuid:1e9143b2-480d-e7d7-ad1c-e0ea552275ac
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>Miscellaneous hacks to use the keyboard under Linux.</p>
<h3 id="Special+Characters+on+X11" name="Special+Characters+on+X11">Special Characters on X11</h3>
<p>The compose key, when pressed in sequence with other keys, produces a
Unicode character. E.g., in most configurations pressing <code>Compose</code> <code>e</code>
<code>'</code> produces é. Compose keys appeared on some computer keyboards
decades ago, especially those produced by Sun Microsystems. However,
compose can be enabled on any keyboard with <code>setxkbmap</code>. For example, compose
can be set to the right <code>Alt</code> key by running:</p>
<pre><code>setxkbmap -option compose:ralt</code></pre>
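<p>Custom sequences can also be defined per user in <code>~/.XCompose</code>. A minimal sketch (the sequence and resulting character below are arbitrary examples, not standard mappings):</p>
<pre><code class="language-text">include "%L"
&lt;Multi_key&gt; &lt;exclam&gt; &lt;question&gt; : "‽"
</code></pre>
<p>The <code>include "%L"</code> line keeps the locale's default sequences in addition to your own.</p>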
<h3 id="Compose+Sequences" name="Compose+Sequences">Compose Sequences</h3>
<pre><code> | no-break space &brvbar; broken bar ||
&shy; soft hyphen -- &micro; micro sign /U
&iexcl; inverted ! !! &iquest; inverted ? ??
&cent; cent sign C/ or C| &pound; pound sign L- or L=
&curren; currency sign XO or X0 &yen; yen sign Y- or Y=
&sect; section sign SO or S! or S0 &para; pilcrow sign P!
&uml; diaeresis &quot;&quot; or &quot; &macr; macron _^ or -^
&acute; acute accent '' &cedil; cedilla ,,
&copy; copyright sign CO or C0 &reg; registered sign RO
&ordf; feminine ordinal A_ &ordm; masculine ordinal O_
&laquo; opening angle brackets &lt;&lt; &raquo; closing angle brackets &gt;&gt;
&deg; degree sign 0^ &sup1; superscript 1 1^
&sup2; superscript 2 2^ &sup3; superscript 3 3^
&plusmn; plus or minus sign +- &frac14; fraction one-quarter 14
&frac12; fraction one-half 12 &frac34; fraction three-quarter 34
&middot; middle dot .^ or .. &not; not sign -,
&times; multiplication sign xx &divide; division sign :-
&Agrave; A grave A` &agrave; a grave a`
&Aacute; A acute A' &aacute; a acute a'
&Acirc; A circumflex A^ &acirc; a circumflex a^
&Atilde; A tilde A~ &atilde; a tilde a~
&Auml; A diaeresis A&quot; &auml; a diaeresis a&quot;
&Aring; A ring A* &aring; a ring a*
&AElig; AE ligature AE &aelig; ae ligature ae
&Ccedil; C cedilla C, &ccedil; c cedilla c,
&Egrave; E grave E` &egrave; e grave e`
&Eacute; E acute E' &eacute; e acute e'
&Ecirc; E circumflex E^ &ecirc; e circumflex e^
&Euml; E diaeresis E&quot; &euml; e diaeresis e&quot;
&Igrave; I grave I` &igrave; i grave i`
&Iacute; I acute I' &iacute; i acute i'
&Icirc; I circumflex I^ &icirc; i circumflex i^
&Iuml; I diaeresis I&quot; &iuml; i diaeresis i&quot;
&ETH; capital eth D- &eth; small eth d-
&Ntilde; N tilde N~ &ntilde; n tilde n~
&Ograve; O grave O` &ograve; o grave o`
&Oacute; O acute O' &oacute; o acute o'
&Ocirc; O circumflex O^ &ocirc; o circumflex o^
&Otilde; O tilde O~ &otilde; o tilde o~
&Ouml; O diaeresis O&quot; &ouml; o diaeresis o&quot;
&Oslash; O slash O/ &oslash; o slash o/
&Ugrave; U grave U` &ugrave; u grave u`
&Uacute; U acute U' &uacute; u acute u'
&Ucirc; U circumflex U^ &ucirc; u circumflex u^
&Uuml; U diaeresis U&quot; &uuml; u diaeresis u&quot;
&Yacute; Y acute Y' &yacute; y acute y'
&THORN; capital thorn TH &thorn; small thorn th
&szlig; German small sharp s ss &yuml; y diaeresis y&quot;
&euro; euro sign e=</code></pre>
<h3 id="Environment+variables" name="Environment+variables">Environment variables</h3>
<p>Some unfriendly applications (including many GTK apps) will override
the compose key and default to their own built-in combinations. You
can typically fix this by setting environment variables; for instance,
you can fix the behavior for GTK with:</p>
<pre><code>export GTK_IM_MODULE=xim</code></pre>
Bash Tips
urn:uuid:7c90e003-49d7-efc5-b355-064546463b89
2024-03-05T00:00:00+01:00
alex
<p>Some bash one-liners:</p>
<pre><code>echo ${!X*}
</code></pre>
<p>Will print the names of all variables whose names start with <code>X</code>. To
output the contents of a variable in a form that can be parsed back by
bash:</p>
<pre><code>declare -p VARNAME
</code></pre>
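<p>A quick demonstration of both constructs (the variable name <code>XYZZY</code> is an arbitrary example):</p>
<pre><code class="language-bash">XYZZY=two
# list all variable names starting with X
echo ${!X*}
# dump XYZZY in a form that bash can re-read
declare -p XYZZY
</code></pre>
<p>The <code>declare -p</code> line prints <code>declare -- XYZZY="two"</code>, which can later be fed back to <code>eval</code> or <code>source</code> to restore the variable.</p>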
<h2 id="Pattern+Matching" name="Pattern+Matching">Pattern Matching</h2>
<pre><code>Operator: ${foo#t*is}
</code></pre>
<p>Function: deletes the shortest possible match from the left</p>
<pre><code>Operator: ${foo##t*is}
</code></pre>
<p>Function: deletes the longest possible match from the left</p>
<pre><code>Operator: ${foo%t*st}
</code></pre>
<p>Function: deletes the shortest possible match from the right</p>
<pre><code>Operator: ${foo%%t*st}
</code></pre>
<p>Function: deletes the longest possible match from the right MNEMONIC:
The # key is on the left side of the $ key and operates from the left.
The % key is on the right of the $ key and operates from the right.</p>
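<p>A worked example of the four operators (the string is arbitrary):</p>
<pre><code class="language-bash">foo="this is a test"
echo "${foo#t*is}"    # shortest match from the left  -> " is a test"
echo "${foo##t*is}"   # longest match from the left   -> " a test"
echo "${foo%t*st}"    # shortest match from the right -> "this is a "
echo "${foo%%t*st}"   # longest match from the right  -> "" (empty)
</code></pre>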
<h2 id="Substitution" name="Substitution">Substitution</h2>
<pre><code>Operator: ${foo:-bar}
</code></pre>
<p>Function: If $foo exists and is not null, return $foo. If it doesn't
exist or is null, return bar.</p>
<pre><code>Operator: ${foo:=bar}
</code></pre>
<p>Function: If $foo exists and is not null, return $foo. If it doesn't
exist or is null, set $foo to bar and return bar.</p>
<pre><code>Operator: ${foo:+bar}
</code></pre>
<p>Function: If $foo exists and is not null, return bar. If it doesn't
exist or is null, return an empty string.</p>
<pre><code>Operator: ${foo:?"error message"}
</code></pre>
<p>Function: If $foo exists and isn't null, return its value. If it
doesn't exist or is null, print the error message. If no error message
is given, it prints <code>parameter null or not set</code>. In a non-interactive
shell, this aborts the current script. In an interactive shell, it
simply prints the error message.</p>
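<p>A short session showing the substitution forms with an unset variable (<code>foo</code> and the values are arbitrary):</p>
<pre><code class="language-bash">unset foo
echo "${foo:-bar}"   # foo is unset          -> bar (foo stays unset)
echo "${foo:=bar}"   # foo is unset          -> bar (and foo is now set)
echo "$foo"          #                       -> bar
echo "${foo:+baz}"   # foo is set, not null  -> baz
</code></pre>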
<h2 id="%24%24+for+Subshell" name="%24%24+for+Subshell">$$ for Subshell</h2>
<p>When running a sub-shell in <code>bash</code>, the <code>$$</code> construct still returns
the process id of the main shell. Use the following construct to
determine the correct process id:</p>
<pre><code>mypid=$(sh -c 'echo $PPID')
</code></pre>
<p>Yes, it looks <em>nasty</em>.</p>
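<p>A sketch showing the difference (in bash 4 and later, <code>$BASHPID</code> gives the same result without spawning <code>sh</code>):</p>
<pre><code class="language-bash">(
  mypid=$(sh -c 'echo $PPID')   # PID of this subshell, not of the main shell
  echo "main shell: $$  subshell: $mypid"
)
</code></pre>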
<h2 id="Retrieving+an+IP+address." name="Retrieving+an+IP+address.">Retrieving an IP address.</h2>
<p><em>Updated 2023-10-20</em></p>
<p>To get the IP address, the easiest way is to use:</p>
<pre><code class="language-bash">ip -br a</code></pre>
<p>Results in:</p>
<pre><code>lo UNKNOWN 127.0.0.1/8 ::1/128
enp1s0 DOWN
eno1 UP 192.168.101.64/24 fd42:bf4e:715f:6ef5:95b0:ecc9:68fa:ac07/64 fe80::82db:4389:522c:332d/64
wlp3s0 DOWN
virbr0 DOWN 192.168.122.1/24
docker0 UP 172.17.0.1/16 fe80::42:95ff:fe8e:29b0/64
br-6f79780fe7d1 DOWN 172.18.0.1/16
veth8758d36@if8 UP fe80::a84f:c3ff:fe8f:45b/64 </code></pre>
<p>Which is easy to parse. Another option is:</p>
<pre><code class="language-bash">ip -o a</code></pre>
<p>Results:</p>
<pre><code class="language-text">1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
1: lo inet6 ::1/128 scope host proto kernel_lo \ valid_lft forever preferred_lft forever
3: eno1 inet 192.168.101.64/24 brd 192.168.101.255 scope global dynamic noprefixroute eno1\ valid_lft 26700sec preferred_lft 26700sec
3: eno1 inet6 fd42:bf4e:715f:6ef5:95b0:ecc9:68fa:ac07/64 scope global dynamic noprefixroute \ valid_lft 1534sec preferred_lft 1534sec
3: eno1 inet6 fe80::82db:4389:522c:332d/64 scope link noprefixroute \ valid_lft forever preferred_lft forever
5: virbr0 inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0\ valid_lft forever preferred_lft forever
6: docker0 inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\ valid_lft forever preferred_lft forever
6: docker0 inet6 fe80::42:95ff:fe8e:29b0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
7: br-6f79780fe7d1 inet 172.18.0.1/16 brd 172.18.255.255 scope global br-6f79780fe7d1\ valid_lft forever preferred_lft forever
9: veth8758d36 inet6 fe80::a84f:c3ff:fe8f:45b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever</code></pre>
<p>This form has more information, but it is still fairly parsable.</p>
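<p>For scripting, either form can be fed to <code>awk</code>. A minimal sketch that pulls the IPv4 address of one interface out of the <code>ip -br a</code> listing (<code>eno1</code> is the example interface from the sample output above; with that sample this prints <code>192.168.101.64</code>):</p>
<pre><code class="language-bash"># print the first address of interface eno1, with the prefix length stripped
ip -br a | awk '$1 == "eno1" { sub(/\/.*/, "", $3); print $3 }'
</code></pre>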
<h2 id="Identifying+virtual+Network+Interfaces" name="Identifying+virtual+Network+Interfaces">Identifying virtual Network Interfaces</h2>
<p>If you need to identify which network interfaces are virtual, use:</p>
<pre><code class="language-bash">readlink /sys/class/net/virbr0</code></pre>
<p>Results in:</p>
<pre><code class="language-text">../../devices/virtual/net/virbr0</code></pre>
<p>The output contains <code>/virtual/</code> in the path. Physical NICs would have something related to the bus
that the NIC is connected to.</p>
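<p>Building on this, a small loop can classify every interface on the system (a sketch; interface names will differ per machine):</p>
<pre><code class="language-bash">for dev in /sys/class/net/*; do
  # readlink shows where the sysfs entry points; virtual NICs live
  # under .../devices/virtual/...
  case "$(readlink "$dev")" in
    */virtual/*) kind=virtual ;;
    *)           kind=physical ;;
  esac
  echo "${dev##*/}: $kind"
done
</code></pre>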
Keep e-mail private
urn:uuid:2674d311-8d1c-1a37-38cd-8e7f02eff0b6
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This is a handy tip.</p>
<p>If you don't want to give out your real email address to register for
a site and don't want to go through the hassle of creating a spam
email address, just point your browser to
<a href="https://www.guerrillamail.com/">Guerrilla Mail</a>.
Upon loading the page you'll have an automatically assigned email
address that is good for one hour.</p>
<p>Hopefully this helps reduce spam.</p>
Git Workflows
urn:uuid:f620760c-2408-73a7-a977-1c926bc2cf05
2024-03-05T00:00:00+01:00
Alejandro Liu
<p>This article describes my personal <code>git</code> workflow.</p>
<h3 id="Start+working+on+a+Topic+Branch" name="Start+working+on+a+Topic+Branch">Start working on a Topic Branch</h3>
<p>This is when we are implementing a new feature. It assumes that you have a working git repo.</p>
<pre><code>git checkout -b "topic" dev
git push -u origin "topic"
</code></pre>
<p>From a different computer, you may want to work on an existing work branch.</p>
<pre><code>git fetch origin
git checkout --track origin/topic
</code></pre>
<h3 id="Keep+Topic+Branch+current" name="Keep+Topic+Branch+current">Keep Topic Branch current</h3>
<p>While developing a topic we may want to bring in any changes made on the dev (integration) branch...</p>
<pre><code>git checkout topic
git merge dev
</code></pre>
<h3 id="Merge+a+Topic+Branch" name="Merge+a+Topic+Branch">Merge a Topic Branch</h3>
<p>Once all the development and testing for a topic is done...</p>
<pre><code>git checkout dev
git pull
# the above switches to the dev (integration) branch and brings it up to date
git merge --no-ff topic
# The --no-ff forces a merge commit even when a fast-forward is possible.
# ... Update any changelogs and commit them...
git push
</code></pre>
<h3 id="Start+working+on+a+HotFix" name="Start+working+on+a+HotFix">Start working on a HotFix</h3>
<p>This is when we want to fix a production release bug. It assumes that you have a working git repo.</p>
<pre><code>git checkout -b "topic" master
git push -u origin "topic"
</code></pre>
<p>From a different computer, you may want to work on an existing work branch.</p>
<pre><code>git fetch origin
git checkout --track origin/topic
</code></pre>
<h3 id="Keep+HotFix+Branch+current" name="Keep+HotFix+Branch+current">Keep HotFix Branch current</h3>
<p>While developing a hotfix we may want to bring in any changes made on the master branch...</p>
<pre><code>git checkout topic
git merge master
</code></pre>
<h3 id="Merge+a+HotFix+Branch" name="Merge+a+HotFix+Branch">Merge a HotFix Branch</h3>
<p>Once all the development and testing for a topic is done...</p>
<pre><code>git checkout master
git pull
# the above switches to the master (production) branch and brings it up to date
git merge --no-ff topic
# The --no-ff forces a merge commit even when a fast-forward is possible.
# ... update any changelogs and commit them ...
git push
git checkout dev
# We also want to add the changes to dev...
git merge --no-ff topic
</code></pre>
<p>Then, on another system:</p>
<pre><code>git remote prune origin
git branch --delete topic
</code></pre>
<h3 id="Finish+working+on+a+HotFix+or+Topic+Branch" name="Finish+working+on+a+HotFix+or+Topic+Branch">Finish working on a HotFix or Topic Branch</h3>
<p>When the branch is truly done, or if you want to abort it...</p>
<pre><code>git branch -d topic
git push origin dev|master
# Use dev or master depending on being a topic branch or a hot
# fix branch respectively
git push origin :topic
# Delete the remote branch... Or ...
git push origin --delete topic
</code></pre>
<h3 id="Create+a+New+Release" name="Create+a+New+Release">Create a New Release</h3>
<p>We are ready for a new release...</p>
<pre><code>git checkout dev
git push
git pull
# Make sure that dev is up-to-date in both directions...
git checkout master
git push ; git pull
# Make sure that master is up-to-date
git merge --no-ff dev
# ... fix version number ...
git commit -a -m"preparing release X.Y"
git tag -a X.Yrel -m"Release X.Y"
git push
git checkout dev
git merge --no-ff master
# ... bump version number ...
git commit -a -m"Bump version to X.Y+1"
git tag -a X.Y+1pre -m"New dev cycle for X.Y+1"
git push origin dev
git push origin --tags
</code></pre>
<h3 id="Setup+New+Project" name="Setup+New+Project">Setup New Project</h3>
<p>For setting up a new project.</p>
<pre><code>mkdir project
cd project
# ... create files ...
git init
git add .
git commit -m"Initial commit"
git tag -a "0.0initial" -m "Initial commit"
git checkout -b "dev" "master"
git tag -a "0.0pre" -m "Development branch"
git push origin --tags</code></pre>
<p>This sets up a local repo with two branches and some descriptive tags.
The "master" branch for release code and the "dev" branch for
development and integration. We now need to configure it on the remote repository.</p>
<pre><code>git checkout master
git remote add origin "Remote repo URL"
git push origin master
git checkout dev
git push -u origin dev
git push origin --tags
</code></pre>
<h3 id="Setup+to+work+on+an+existing+project" name="Setup+to+work+on+an+existing+project">Setup to work on an existing project</h3>
<p>Setup clone:</p>
<pre><code>git clone "Remote repo URL"
git push origin master
</code></pre>