Archive for the ‘Things of Interest’ Category

unRAID and How I Do Things

I recently started using unRAID for my server at home, and I figured I’d do a rough write-up of how I configured mine and some design decisions I made along the way.

What is unRAID?

For the uninitiated, unRAID is “a scalable, consumer-oriented, server operating system. Traditional approaches to personal computing technology place limits on your hardware’s capabilities, forcing you to choose between a desktop, media player, or a server. With unRAID, we can deliver all of these capabilities, at the same time, and on the same system.” In practice, that means unRAID is a platform from which you can store and serve files, and run applications to manage and stream those files (such as Plex or Kodi) using Docker containers or fully virtualized machines with whatever specific applications you want installed.

How did I build my network?

One of the perks of upgrading to unRAID on a new server (as opposed to running a physical server with all of the applications I needed installed in an ad-hoc manner) is that the previous server was freed up to do other things. Since my old server had two Ethernet ports and was still more than capable, I repurposed it to run OPNsense, an open-source fork of pfSense, as a network and security appliance. The network is pretty unexceptional from there, although I do connect to my VPN via OPNsense and route traffic through it, so any host on the main fabric can natively reach VPN resources.

How did I build my system?

My system build is pretty simple, with some basic goals and ideas:

  • All things equal, I want the lowest power consumption possible
  • Again, all things equal, I want the lowest footprint possible
  • The system must be capable of transcoding 4 streams at once
  • The system must have enough memory for at least two medium-sized VMs and ten Docker containers
  • The system must have enough storage to accommodate running the VMs and Docker containers (but it does not need to store all of my files, since I already have a NAS)

I went with a Skull Canyon NUC with 32GB of DDR4 RAM and two 1TB M.2 SSDs. I also used an off-the-shelf 32GB USB flash drive for unRAID’s OS (which is overkill, since it should really never use more than 1GB). The system stores no media files, only application configurations, ISOs for VMs, Docker volumes, etc., and once fully spun up I’m at 25% RAM used and 33% storage used, which should leave me room to grow and an adequate amount of storage for logs. The CPU being quad-core with Hyper-Threading means it should have enough processing power to run all VMs and Docker containers and burst-transcode 4 streams with a little bit of overhead. And since I’m using DDR4 and M.2 storage, the system’s bottleneck will probably always be the CPU, as access times for application-related storage and RAM should be very low. The bottleneck to the NAS is more than acceptable: generally, a user only needs one file at a time, and the difference between waiting 5ms and 70ms for a single file to start streaming is imperceptible to the end user. Batch operations are infrequent and can happen in the background.


What do I run on my system?

I run Plex, ownCloud, Nginx and a number of supporting applications, nearly all containerized. Nginx serves as a convenient reverse proxy that adds SSL support and authentication in front of every backend web-serving container. In addition, I have a few different monitoring/performance containers such as netdata, Zabbix and cAdvisor. Configuration volumes are mounted into the containers from unRAID shares, and external storage is mounted into the containers as volumes using the “Unassigned Devices” unRAID plugin. In addition, I run Splunk on a VM.
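As a rough illustration of the reverse-proxy arrangement, a per-service Nginx block looks something like the following; the server name, upstream address, certificate paths, and htpasswd file here are all placeholders rather than my actual config:

```nginx
# Hypothetical per-service block: TLS termination + basic auth in front
# of a backend container (here, a Plex-like service on port 32400).
server {
    listen 443 ssl;
    server_name plex.example.com;

    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://192.168.1.10:32400;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

One block like this per service keeps certificates and credentials in one place instead of scattered across containers.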

How do I monitor my system?

This is where most of the actual work took place in the migration, since my previous designs assumed a lot about the system providing critical services (for example, that Zabbix could see all processes, which is not the case for a Zabbix Docker container). I did some re-thinking about what my monitoring needed to look like and decided I really only needed to know:

  • When services were unavailable
  • When the system itself was unavailable
  • When the system itself was misbehaving (high CPU, low/no RAM etc)

Since not all services provide an externally-facing HTTP endpoint (such as my custom plex-status container), I needed to be able to either monitor log output and alert on that, or monitor process listings (which Zabbix, as configured in Docker, cannot do natively).  What I ended up configuring was HTTP endpoint checks from the Zabbix server to each service (over my VPN), and for the services that don’t expose HTTP endpoints, I hit the cAdvisor API (again over VPN) and search for the process in the process listing (since cAdvisor mounts the system’s /proc inside the container, it sees all processes).  I’ve also begun using netdata for performance alerting, which duplicates some features of Zabbix but alerts directly to Discord instead of my default Zabbix action, which is email.
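The cAdvisor check above boils down to fetching the API’s process listing and searching it. Here’s a minimal sketch of that idea; the host and the v2 `ps` endpoint path are assumptions based on my setup, so verify them against your cAdvisor version:

```python
# Sketch of the "is my process running?" check via cAdvisor.
# The host below is a placeholder; cAdvisor's v2 API exposes a process
# listing, but double-check the exact path for your version.
import json
from urllib.request import urlopen

def process_running(ps_entries, name):
    # cAdvisor process entries carry the command line under "cmd"
    return any(name in entry.get("cmd", "") for entry in ps_entries)

def check_cadvisor(host, name):
    with urlopen(f"http://{host}/api/v2.0/ps/") as resp:  # hypothetical host
        return process_running(json.load(resp), name)

# Offline demo with a fabricated listing:
sample = [{"cmd": "/usr/bin/python plex-status.py"},
          {"cmd": "nginx: master process"}]
print(process_running(sample, "plex-status"))  # True
```

The nice property is that one check covers every container, since cAdvisor sees the whole system’s /proc.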

How do I handle logs?

Normally, unRAID catches Docker logs and exposes them in the web UI, where you can watch them in real time. This is great for debugging containers, but once you put a design in place, how do you read logs long-term? My solution was to use Splunk, since Docker has a built-in logging driver that ships logs to Splunk. In the “Extra Parameters” field for each of my containers, I add

--log-driver=splunk --log-opt splunk-token=xxx --log-opt splunk-url=http://xxx:8088 --log-opt tag="{{.Name}};{{.FullID}};{{.ImageName}};{{.ImageFullID}}"

This ships the logs off to Splunk, where I can ingest them and search on them.

What challenges did I face and what pitfalls did I have to address/still have?

First off, the biggest challenge to the migration was simply time; since my previous server had gradually grown in responsibilities and apps running on it, each migration of those responsibilities took time, even if it was (best-case scenario) just 15-30 minutes to spin up a container and restore configs. I also spent a lot of time struggling with containerized Splunk (eventually deciding to run it on my VM), with giving the Zabbix container access to the underlying system even though it was unaware it was running in a container (which I opted not to do in favor of the cAdvisor approach above), and with other ultimately unfruitful paths/ideas.

I also have some deficiencies in design that I have to be cognizant of, or find a way to address in the future.  For example, the Docker containers ship their logs off to Splunk, but Splunk itself is running in a VM; what happens when the containers come up before Splunk? I lose the logs.  What happens when the containers can’t reach Splunk? Well, if I’m using a DNS name (which I am) and the name doesn’t properly resolve, it will prevent the containers from starting.  Otherwise, if it gets an IP but it’s wrong or out of date, I also lose the logs.
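One partial mitigation I haven’t put in place yet (so treat it as an untested assumption): the Splunk log driver has a splunk-verify-connection option, and disabling it should let containers start even when Splunk is unreachable at launch, at the cost of dropping logs silently instead:

```shell
# Added to "Extra Parameters" alongside the other --log-opt flags;
# startup no longer blocks on reaching Splunk (logs may still be lost).
--log-opt splunk-verify-connection=false
```

That trades one failure mode (containers refusing to start) for another (quiet log loss), so it’s a judgment call.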

Speaking of logs, many containers assume a logging level that is more verbose than expected. For example, cAdvisor logs one performance event per container (and VM) per second. When you have 15 of them, that means 15 events per second, which (if you use the free 500MB/day Splunk license, like I do) will cause you to go over your daily logging limit very quickly. That was an easy fix, though; it just required adding --housekeeping_interval=15s to the launch options of the container. On that note, adding container options (as opposed to Docker options) requires adding them to the end of the “Repository” field in unRAID, since the “Extra Parameters” field is for extra Docker parameters, not parameters to pass to the container.
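To put numbers on that, here’s the back-of-the-envelope math (the ~1KB-per-event size is an assumption for illustration, not a measurement):

```python
# Rough cAdvisor log-volume estimate, before and after raising the
# housekeeping interval from the 1s default to 15s.
containers = 15
seconds_per_day = 86_400

default_events = containers * seconds_per_day        # 1s housekeeping
tuned_events = containers * (seconds_per_day // 15)  # --housekeeping_interval=15s

kb_per_event = 1  # assumed average event size
print(default_events, "events/day ~", default_events * kb_per_event // 1024, "MB/day")
print(tuned_events, "events/day ~", tuned_events * kb_per_event // 1024, "MB/day")
```

At one event per second, 15 containers alone can blow past a 500MB/day ingest limit; at 15 seconds, the same data is a rounding error.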

Am I happy with the ultimate product?

Yes, it’s much easier to manage and upgrade, and there is less friction for future expansion. In addition, by becoming reliant on Docker, I can make changes or add features and share them with the rest of the Docker community.

What does it look like?

Skull Canyon NUC

Some observations

  • Men are commodities, women are products; we are sold to one another based on idealized premises.
  • Advertising is the art of barraging the sensibilities until they are desensitized.
  • (Therefore,) you can’t advertise integrity.
  • Truth needs no witness.
  • One does not need to be able to describe one’s feelings in order to possess them.
  • Words are a compass, not a map. They can only provide direction; it is up to the observer to experience the landscape.
  • Introspection is not a passive act.
  • You can’t answer unasked questions, only make unsolicited statements.

More to come, maybe.

Understanding Complexities

When we understand something, we create a model in our brains using the ideas, objects, systems and vocabulary of our previous experience. While this gets us close, it’s like using only chess notation to describe things. For some things, like chess games, it works perfectly. But for others, like how you feel after a warm summer rain, it’s extraordinarily inadequate. So too do we try to explain people, the universe and life using “rough” terms. We are happy, we are sad, we are lonely, we are proud. But these words mean different things to different people, and they describe things that could be vastly different from one another. Is being proud of oneself the same as being proud of one’s offspring? Being proud of one’s nation?

Taking this to the extreme, should we stumble upon the true nature of the universe, will we even have the language to describe it to ourselves, let alone each other?

TCPPing returning empty output

After setting up Smokeping + nginx with fcgiwrap I also added TCPPing, and while it worked on a different instance, this time it would just output blank lines at one-second intervals. Since a search didn’t turn up any results, I figured I would throw together this post to give the Internet some more search fodder in case someone else runs into this.

TL;DR: Install tcptraceroute

This post was what actually solved my issue, specifically the section at the bottom:

>To make things a bit more confusing, there are various symlinks 
>(including /etc/alternatives/tcptraceroute) and shell scripts
>to offer compatible command line interfaces.
>In your case, just installing the `tcptraceroute' package should do

It seems that, at least on Ubuntu-based systems, /usr/sbin/tcptraceroute is not the tcptraceroute that TCPPing expects, which causes this behavior.
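A quick way to check what you actually have on the PATH (a small diagnostic sketch of mine, not part of TCPPing itself):

```python
# Report where "tcptraceroute" resolves on the PATH, or suggest the fix
# from the quoted answer above if it's missing entirely.
import shutil

def tcptraceroute_hint():
    path = shutil.which("tcptraceroute")
    if path is None:
        return "tcptraceroute not found; install it: sudo apt-get install tcptraceroute"
    return f"tcptraceroute resolves to: {path}"

print(tcptraceroute_hint())
```

If it resolves to a wrapper script or an /etc/alternatives symlink rather than the standalone binary, that’s the smoking gun.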

It’s regrettable that TCPPing is just a quick script thrown together; otherwise it would hopefully be able to throw a meaningful error message. But such is the price of free.


Since the NSA debacle I’ve been pondering the current state of affairs regarding privacy and “How most people think the Internet works” vs “How the Internet actually works”. So here are my thoughts on the three subjects:


Privacy

You have none. There are groups of people and multiple individual entities that can access data you do not want them to access, without your approval and usually without your knowledge. Nor is this a static list; it is an ever-changing set of entities. I will expound on this in the “How the Internet actually works” section, but suffice it to say that if you are educated about the Internet, you should have no reasonable expectation of privacy, at any time. This is partly by design, but also partly for ease of administration. If you have no idea what kind of traffic your network is routing, how can you effectively shape it for the best performance for all users? And why would you remove an administrator’s access to a terminated user’s account when the files that user was working on may still be needed?

It comes down to the simple fact that we take the shortest route to achieve our goals, and when those goals are closer to “make it work” than “make it work correctly”, it shouldn’t surprise you to see what we see here. Is it necessary to improve security for the sake of privacy? Instead of answering, consider the fact that with only your name, date of birth and the hospital you were born in, researchers found they could correctly predict the social security number of 8.5% of the population with fewer than 1,000 attempts. The system was originally designed with one scope in mind, and attacks of this nature were never considered. SSNs have now become authenticators instead of identifiers, which means more people can get more access to more of your data.

How most people think the Internet works

Through conversations with people about the Internet, it seems people believe the Internet works like a literal web, with direct 1-to-1 connections to all of your favorite services, like a super-long Ethernet cable for each website to everyone’s computer.  It seems people also think a firewall will actively block attacks and unauthorized persons, like a 24/7 systems administrator that “knows” when the network is under attack.  They think their traffic is mostly unreadable in-transit, and trust online services with their livelihoods.

How the Internet actually works

There are millions of miles of cable, built into walls and floors, under roads, underwater and in ditches, connecting the routers over which Internet traffic flows. Sometimes it bounces off a satellite for good measure. The owners of these devices are countless; just to get to Google, for example, you need to talk to your local cable pool’s router, then your cable company’s core router, then their upstream provider’s router, then any number of intermediary datacenters’ routers, then Google’s datacenter’s router, then Google’s router, before your request reaches the relevant server, passing through additional routers on the way. At any time, the operators and many of their employees can see your raw (usually unencrypted) traffic. This all runs over cable that can be decades old, protocols that were sometimes intended for an entirely different purpose, and software hacked together (sometimes actually designed) by disgruntled, happy, hardworking or lazy employees (or freelancers) sometime in the last half-century. Also on this global (read: unregulated) network are pieces of software with the specific intent of doing things they are not supposed to, and you’re directly connected to them, as is your data. Also connected to this network are HVAC systems, security cameras, industrial machinery, telephones/fax machines, medical devices, etc. that have not been (or cannot be) updated. On top of all of this, we now have definitive proof that this network can be manipulated and studied by one or more governments to glean data and perform offensive cyber operations.

It’s not a surprise that there are so many breaches of security when security hadn’t even crossed the minds of many of the designers of integral parts of the Internet.

How do we cope with this, and how do we fix it?

The first step in any successful project is to determine the scope, and unfortunately ours is gigantic here. We would need to build authentication and privilege mechanisms (more advanced and robust than current offerings) down to the silicon level of every device, which of course would render older equipment incompatible. Using authentication schemes yet unknown, we must construct a suite of software and hardware that enforces abstract security concepts in a concrete and consistent way, globally and with as much compatibility as possible with current infrastructure. This system would need to place absolute power in the owner of the data, and recognize complicated ownership arrangements like loaning/leasing data and limited access. All of this in a way that is easy to use and transparent, using powerful, proven cryptography that can be upgraded and changed as technology progresses, with interoperability and guaranteed security. It needs to provide, at a bare minimum, Identification, Authentication and Authorization for any given piece of data, at the human/company/device/entity level. Then, once everything is in place, we would still need to configure it for explicit permissions, ones which are the minimum needed for full function at every level. And that is only after first deciding the entire organizational structure of our electronic devices and planning out what each device’s function and level of permissions needs to be.

Sounds easy, right?


So I started getting more into code, and while unfortunately my synergy project for android will probably go nowhere, I will now start posting other code to my github page! So, check it out!

Here I go on another philosophical tangent

So I’ve been thinking about 9/11 recently, and I think it’s important, really, to “Never forget 9/11”. Just like Pearl Harbor, racial segregation and the Boston Tea Party, it’s a historical event that can teach us a good deal about increasing the longevity of our country and even our own lives. I must pause here and say I didn’t want to write “country”, because I feel patriotism is a queer act that should never be taken seriously, nor disregarded. I feel one should be patriotic for the human race (suck it, Neanderthals), but not for a patch of land on a turd of iron. With that said, consider this train un-derailed.

So the lessons 9/11 can teach us are some of the same lessons that Pearl Harbor, racial segregation and the Boston Tea Party can as well. Didn’t think I’d tie them together? I got this, just watch. First, why did 9/11 happen? It wasn’t because Osama bin Laden hated our freedom. It was because of our foreign policy (no, really). While it’s not totally relevant, “The motivations identified for the attacks include the support of Israel by the United States, presence of the U.S. military in the Kingdom of Saudi Arabia, and the U.S. enforcement of sanctions against Iraq.” What this boils down to is that we were seen as oppressing freedom in the Middle East. In other words, people’s rights were viewed by Al-Qaeda as being infringed. Now, the specifics of why we support Israel, why we have military in Saudi Arabia and why Iraq was sanctioned aren’t quite relevant, but through the link earlier, you can learn these things for yourself.

Now we have the motivation; how did it happen? Simply put, terrorists hijacked 4 planes and used them as missiles filled with people for destruction. How was a thing like this possible? Not simply through a lapse in security, but through the fact that securing a nation against all threats, including ones of this magnitude, is simply not possible 100% of the time. What can we do to prevent or mitigate this kind of risk? I’ll get to that at the end. What did we do? Create the Department of Homeland Security, a government entity that had a budget of nearly a hundred billion dollars last year. I say “we” because the people only had a say in who was in charge at the time, not in what they did, so “we” is really Congress, the Senate, and the President.

Before I go further, I’d like to briefly discuss the other mentioned historical events, namely their cause and result.

The Attack on Pearl Harbor
Cause – Culmination of Japanese invasions and expansions of power, including the invasion of Manchuria. The Japanese wished to disable or cripple the U.S.’s naval presence so as to allow for continued expansion.
Effect – The U.S. became involved in World War II, and thousands of Japanese-American citizens were interned.
Racial Segregation
Cause – Financial motivations, bunk science and ingrained racism. If you read no other links, read this one.
Effect – Blacks in the U.S. were not only disenfranchised, but legally subdued in their rights as citizens of the U.S.
Boston Tea Party
Cause – Actions by the British government that expanded its control over the colonies and their dependence on it
Effect – Increasing sanctions by the British government and the eventual start of the American Revolutionary War.

In each example, the cause has a common theme: the oppression of a people’s right to life, liberty or the pursuit of happiness.
In each example, the act has a common theme: the reaction of the people against those who do the oppressing.
In each example, the end result has a common theme: the overreaction of those in charge to maintain order and power over the people.

Now, some of that is admittedly a bit of a stretch, but each point could be supported by historical events. Each of these events is an opportunity to learn how to prevent similar events, and to peer inside humanity itself to better understand how to improve ourselves. From these examples, we can hypothesize a few things:

  • Those in power wish to stay in power.
  • It is very easy for those in power to abuse their power.
  • After the oppressed revolt, those in power tend to exert their power to an even greater degree.
  • This is either countered with all-out aggression, or those in power succeed in retaining their power.

The things we can learn from all of this are: first, it’s important to recognize oppression and speak out against it wherever it occurs. Second, when an entity gains too much power it becomes inherently dangerous and thus must either be divided or resisted. Lastly, from all of these events we learn that wherever there is injustice there will always be those, however few they may be, who speak out against it. And even while their voices may not reach us through the looking glass that is history, their words and actions still mattered, and they acted as a people who recognized that freedom and liberty are of paramount importance. And this is our lesson today: it is our duty as citizens of the world, as humans on our rock, to work to guarantee life, liberty, freedom and equality to all people.

So don’t forget 9/11, because we’re not done being affected by it yet.

I’d normally post this to the links section on the right, but …

… this is just too interesting not to comment:

Basically, this is one of the most well-engineered pieces of malware; so much so that researchers still don’t know how it spreads. I highly recommend you read the whole thing, but some highlights about the malware:

  • Cryptographically obfuscated payload – the key is the configuration of the target machine.
  • Unknown attack vector
  • Well-engineered load-balancing of C&C servers
  • Inexplicable other behaviors, such as installing a new font (?)

The bottom line is this is the most interesting piece of malware I’ve seen in a long time, all seemingly from the authors of Stuxnet (supposedly the US or Israeli government).

On Knowledge and Behavior

So, I’m going to wax philosophical for a bit here and talk about knowledge as it pertains to behavior and life experience. I’ve found that some of the best behavioral changes in my life have come from things I’ve known my whole life, such as moderation in food portions and how to respond to and participate in social situations. Even though these things were in my head and I could recall them at any time, for a long time I didn’t change my behavior, or even try to; the knowledge just sat idle, reminding me that I was doing things wrong.

What I try to do, however, is learn and continually improve myself, so I’ve set out to learn the process of learning as it pertains to self-modifying behavior, and to understand what inhibits it, so I can make faster progress with behavioral modification.

It’s difficult to explain, but it seems there are two kinds of knowledge, external and internal; that is, knowledge that originates from external sources and knowledge that originates internally. These names are something of misnomers, because you can recall both kinds without external assistance. Internal knowledge is things like tying your shoes or what to say when someone greets you, while external knowledge is another person’s age or how many million miles the Earth is from the sun. Internal knowledge is easier to recall but more difficult to update, and in general it pertains more to one’s behaviors, whereas external knowledge is usually things like statistics and facts.

What I have found is that even when you have received external knowledge relating to behaviors you wish to modify (such as eating smaller portion sizes, or performing hourly reality checks), it is very difficult to actually modify the behavior, even with good focus, concentration and willpower. One way to successfully accomplish behavioral modification using external knowledge is to constantly focus on the behavior you wish to modify until it becomes second nature, keeping it at the forefront of your mind at all times. This is often what happens with musicians: if they learn a song incorrectly, then in order to re-learn it correctly they must practice it constantly until the corrected version becomes internal knowledge.

This approach only works in specific instances because we are not processors, and we won’t always remember during our “interrupt times” to check the list of things that need to be checked.
Another way to internalize external information is to consider the new information as deeply as you can, argue against the proposed changes (alone or in a group), and gradually fade out the old methods. This is the one I try to employ, but even though it appears more effective than forcing yourself to always pay attention to the behaviors you wish to change, the technique is still flawed: sometimes you will arrive at a conclusion but be unable to internalize it and modify your behavior, for any number of difficult reasons, such as the old behavior being easier or the new behavior making you uncomfortable.

The final method is to internalize the information through experience. This is something I feel most people have issues with: older people constantly warn younger people about their unhealthy habits, or give advice to young couples that the couples can’t follow because they haven’t learned it from experience. I feel experience is the major source of internal knowledge, because some things you have to “learn the hard way”.

What I want to do is change that and learn the easy way. One technique I hope to try is to analyze the differences between the behaviors and create small steps that help bridge the gap but are easier to implement. For example, if I wanted to start working out daily, I would come up with a large number of small changes: stretching when I remember, then stretching every day, then doing slightly more rigorous physical exercise when I feel like it, then exercising on weekends, and then maybe I’d be able to make the jump to daily exercise. It would take more steps than that, but hopefully you get my drift. One other large hurdle is simply deciding what you want to do, and whether you actually want to make that permanent change.

For example, at my heaviest I rationalized that I enjoyed food more than I would enjoy being at a healthy body weight, and that prevented me from losing weight until the rationale was overthrown.

In conclusion (and I hope to have the time to edit this so it’s more coherent), I think that behavioral modification is easiest when it’s planned out and there’s true initiative behind it. Maybe in a future edit or a future post I’ll talk about the inclusion of external factors such as Android apps and other people, but as it is this post is longer than I’d want.

OpenVPN Windows 7 Network Issues

So I spent an hour and a half fixing this, so hopefully someone comes across this post and it helps them.

My issue was that the OpenVPN adapter on my Windows 7 computer was an Unidentified network and I couldn’t change it. Because of this, it didn’t follow the right firewall rules, making it impossible to RDP in over the VPN.

So, after much searching high and low, the fix turned out to be very simple: just add the following lines to your client config file:

# NLA issues
route-metric 512

And restart the VPN connection.

Thanks to this site which was very difficult to find.



I make no guarantees or warranty of any kind as to the accuracy or usefulness of any information posted here. In addition, all opinions are my own and do not necessarily reflect those of any other individual/entity, including but not limited to my employer, family or friends.