How About Those Hidden Costs?

Usually, when business owners draw up a budget or a forecast, they try to gather all of the information necessary to make a decision. So they check as many costs as they can, see which ones they can control, figure out how to turn a profit, and move in the right direction. Sadly, not every business director (or, really, anyone who tries to look at their costs) manages to find all of the costs. Sometimes they can’t see how certain costs affect the business; other times, they don’t see the costs at all. Often the reason is that they aren’t looking at the costs at different levels.

Let’s take the cost of food. Many will say that the cost of food has increased, and I have heard it claimed that it’s less expensive to eat out than to buy your own ingredients. But this is not always the case. A home cook can buy the ingredients and choose the portion sizes that go into the meal. Unlike a restaurant, where portion sizes tend to be fixed (or even shrink after a year or so), a home cook can throw in just a few units of an ingredient and have a meal. Heck, when I’m cooking from a recipe, I routinely slash the quantities and go with half the amount. I can save the ingredients for another day, or even make leftovers. But saving on the quantity of ingredients is only one part of looking at the cost.

A recent BBC article noted that an American woman in the 1950s and 60s would spend hours in the kitchen preparing meals. Not all of these women were average homemakers; some were highly educated and came from well-to-do families. Preparing food at home was simply the usual thing to do. But in the 60s, prepared foods like TV dinners came onto the scene, and mothers began spending less and less time in the kitchen. So is this a good thing? Again, it depends on your perspective.

Now that people don’t need to spend so much time in the kitchen, they are freed up to do other things, such as eating out, which is what many people do. The BBC article says that Americans now spend more on food and drink outside the home than inside it. So when a family looks at their grocery bill and gasps at how high it is, they usually don’t contrast it with how expensive eating out can be. The food a restaurant makes is usually great, but it comes at a high cost, too. What the restaurant pays to acquire that food varies, and it may well be lower than what you’d pay at the grocery store (oftentimes, it is). The restaurant, though, has to add in the cost of many other things, such as labor and rent, because it needs to turn a profit.
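To make that concrete, here’s a toy cost model in Python. Every number in it is made up purely for illustration (none of this comes from the BBC article); the interesting part is that the “cheaper” option can flip depending on whether you count the value of your time.

```python
# A toy cost model. Every number below is a made-up placeholder;
# plug in your own grocery bill and menu prices.

grocery_cost = 12.00       # hypothetical: ingredients for one dinner
portions_at_home = 3       # halving the recipe still leaves leftovers
hours_cooking = 1.0        # hypothetical: shopping and prep time
value_of_time = 15.00      # hypothetical: what an hour is worth to you
restaurant_price = 18.00   # hypothetical: one entree with tax and tip

ingredients_only = grocery_cost / portions_at_home
with_time = (grocery_cost + hours_cooking * value_of_time) / portions_at_home

print(f"Home, ingredients only:   ${ingredients_only:.2f} per meal")
print(f"Home, counting your time: ${with_time:.2f} per meal")
print(f"Restaurant:               ${restaurant_price:.2f} per meal")
```

On ingredients alone, the home meal wins easily ($4.00 versus $18.00 here); once your time is priced in, the gap narrows, which is exactly the kind of hidden cost I’m talking about.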

How does this relate to other businesses? The metrics a business uses to determine how costly something is may capture only one way of seeing it. For instance, a company may look at how much energy its servers consume, which is usually measured in watt-hours. This metric can be used to decide whether to buy servers or to outsource (to a cloud provider, say). While this is a perfectly good metric, another one to consider is how many outages the prospective provider has had at its datacenter, and how it handled them. Yet another (albeit a more involved one) is the cost of moving the company’s data out once the contract with the provider ends. These are just some examples of how a company should look at possible costs from different angles.
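As a sketch of what weighing several metrics might look like, here’s a small Python comparison. All the figures and cost categories are placeholder assumptions, not real pricing; the point is the structure: the outsourcing side has to carry expected outage costs and the eventual data-migration cost, not just the monthly fee.

```python
# A sketch of weighing more than one cost metric.
# All figures are placeholder assumptions, not real pricing.

HOURS_PER_YEAR = 24 * 365

def in_house_cost(watts, dollars_per_kwh, hardware_per_year):
    """Yearly cost of running our own server: energy plus hardware."""
    energy_kwh = watts * HOURS_PER_YEAR / 1000  # watt-hours -> kWh
    return energy_kwh * dollars_per_kwh + hardware_per_year

def outsourced_cost(monthly_fee, outages_per_year, cost_per_outage,
                    exit_migration_cost, contract_years):
    """Yearly cost of a provider: the fee, the expected outages, and
    the often-forgotten cost of moving data out at contract's end."""
    return (monthly_fee * 12
            + outages_per_year * cost_per_outage
            + exit_migration_cost / contract_years)

print(f"In-house:   ${in_house_cost(400, 0.12, 1500):,.2f} per year")
print(f"Outsourced: ${outsourced_cost(250, 2, 800, 5000, 3):,.2f} per year")
```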

Please note that I’m not saying that the decision-makers at companies don’t take costs into consideration. Ideally, they will take all of the important costs into account. The hard part is figuring out which costs really matter, and what the company can do about them.

Trying My Hand at Making a System for Procedurally Generating Stories

Last year, I got the idea of making a game where a group of friends could get together, have a game scenario generated for them, and start playing. This is called procedural generation, wherein game settings, mechanics, assets, and other components are created from an algorithm and a bit of randomness. I thought I would just research this, since creating any sort of video game takes a lot of time and skill. In this case, I was looking into a system for generating stories.

I’ve read a few things about procedurally generated stories (for instance, Tale-Spin and Minstrel) and I stumbled upon an approach that may be within my reach. From a paper titled “Random Word Retrieval for Automatic Story Generation” I found out about something called ConceptNet. This is a commonsense knowledge base that captures semantic relations between concepts. So you can look up one concept (like “dog”) and find concepts related to it (such as “animal”). The paper describes a planned system that would use ConceptNet to find the relations between words (in a process called “Concept Correlation”) and build a story out of them. Sadly, the authors haven’t yet implemented the system. So what I’m trying to do is make something that will, uh, sort of make a story.

In the paper, they say the system requires a knowledge-based system, which is quite tedious and difficult to create (or so the paper says). So I’m just trying to find the connections between words and concepts. All I’ve been able to do so far, though, is mess with the Python interface to the ConceptNet website and pull back some related terms. Finding the connection between two dissimilar terms is difficult, because each concept has many branching nodes which connect to other nodes. Finding the right nodes that connect two concepts takes a while, because the system has to iterate over the nodes until it finds a match.
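For the curious, here’s roughly what that experimenting looks like. This is a minimal sketch against ConceptNet’s public REST API at api.conceptnet.io; the endpoint and the JSON field names are my reading of the API documentation, so treat them as assumptions to double-check.

```python
# A minimal sketch using ConceptNet's public REST API; the endpoint
# and JSON field names ("edges", "rel", "start", "end", "label") are
# my reading of the API docs, so verify before relying on them.
import requests

def related_terms(term, limit=10):
    url = f"http://api.conceptnet.io/c/en/{term}"
    data = requests.get(url, params={"limit": limit}).json()
    for edge in data.get("edges", []):
        rel = edge["rel"]["label"]
        start = edge["start"]["label"]
        end = edge["end"]["label"]
        print(f"{start} --{rel}--> {end}")

related_terms("dog")
```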

The examples I’ve been using are “dog” and “snow”. So the system would have to go through each node in the “dog” concept until it found a node which connects to “snow”. It could be any connection, such as “dog” -HasA-> “nose” -RelatedTo-> “wet” -PropertyOf-> “snow”. Please note that these aren’t actual connections in ConceptNet, but something like this can be found in the database.
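That “iterate over the nodes until it finds a match” step is essentially a breadth-first search. Here’s a sketch of how it might look over the same REST API; again, the response fields are assumptions from the public docs, real concept URIs can carry sense suffixes (like /c/en/dog/n) that this toy version doesn’t normalize, and a real run would want caching and rate limiting.

```python
import requests
from collections import deque

def neighbors(concept, limit=20):
    """Yield (relation, neighbor URI) pairs for an English concept."""
    data = requests.get("http://api.conceptnet.io" + concept,
                        params={"limit": limit}).json()
    for edge in data.get("edges", []):
        for node in (edge["start"], edge["end"]):
            uri = node.get("@id", "")
            if uri.startswith("/c/en/") and uri != concept:
                yield edge["rel"]["label"], uri

def find_path(start, goal, max_depth=3):
    """Breadth-first search from start to goal, returning the hops."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        concept, path = queue.popleft()
        if concept == goal:
            return path
        if len(path) >= max_depth:
            continue  # don't wander too far from the start concept
        for rel, nxt in neighbors(concept):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(rel, nxt)]))
    return None  # no connection found within max_depth hops

print(find_path("/c/en/dog", "/c/en/snow"))
```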

So I don’t know how I’m going to tackle this monster, let alone make a story out of it.

Newer Ways of Audit Reporting on Third-Party Companies

I went to a recent meeting of the North Texas chapter of ISACA, where there was a presentation on SSAE 18.  For those of you who don’t know, SSAE 18 supersedes SSAE 16, and consolidates Service Organization Controls reporting into something more manageable.  Here’s what I learned about SSAE 18, SOC 1, and SOC 2.

SSAE 18 puts more emphasis on testing the design of controls at subservice organizations (i.e. third parties to which the organization has contracted out some process) and whether they are doing what they are supposed to be doing.  The auditor, through a Service Organization Controls (SOC) report, has to report on the effectiveness of these controls.  For a SOC 1 report, the auditor tests the controls as they pertain to the financial statements.  With SOC 2, the auditor reports on the controls with regard to security, availability, processing integrity, and confidentiality.

Now the auditor has to look for things such as complementary user entity controls, which are controls assumed to be in place at the user entity.  The auditor will have to look at the reports the subservice company sends to the main organization, and see whether the organization actually verifies the information in them.  For instance, the auditor will have to check that system-generated reports are being validated by the users of those reports.  The processing integrity principle gets heavy use in this situation.

This audit looks at how management has chosen suitable criteria for the control, and how well they’ve measured the subject matter (here, “subject matter” means the risk relevant to entities using the subservice company).  So an auditor will check whether the risk actually relates to the entity’s business with the subservice company, then look at the metrics for the current control and see whether they actually measure what the control is supposed to do.

Migrating My Nextcloud Server to Another Server

When I saw that my version of Ubuntu could not be upgraded any further (due to DigitalOcean’s system), I had to migrate my Nextcloud installation (version 11.0.2) to a newer version of Ubuntu (16.04). The Nextcloud documentation details exactly how to migrate an installation to another Linux server, so I tried following it. Sadly, though, it leaves a couple of things out. So here’s my version of how I migrated my Nextcloud installation to another server.

First of all, the docs say to back up your data, which is what I did (to some extent). Next, I spun up a droplet with specifications similar to my previous installation (512 MB of RAM and 20 GB of storage, for instance). I made sure to put my usual SSH key on this new droplet, as well as securing it in other ways. At first, I thought I knew how to do this easily: copy the /var/www/nextcloud directory over to the new server (which was also set up for Nextcloud), change some directory names, and be done. It was not that simple.

When I tried this method, I couldn’t access the web interface. Worse, I had forgotten to change the firewall configuration on the new droplet and accidentally “locked” myself out. So, after some more research, I tried again with another droplet.

This time, I copied the files using rsync, and made sure I used the proper switches (the “-a” archive switch, plus “-t” to preserve timestamps, though “-a” actually implies “-t”). I saved the files to a new directory on the server, and made sure to back up the old files. I thought that this time I had fixed the issue. But Fate can be cruel.

Even though I had copied the files over in the “correct” fashion, the server still wasn’t accessible from the web. Looking at the /var/log/apache2/error.log file, I found that the webserver couldn’t start because Nextcloud was unable to read the database. After researching the problem more, I learned that the files can’t just be “copied” over; the database also has to be dumped and imported separately. So, after scrapping that droplet (I had changed it too much already), I spun up a new one and tried the whole thing all over again.

First, I put the server into “maintenance mode” and stopped the Apache server. Then I dumped the database (via the command mysqldump -u ownCloud -p ownCloud > /tmp/dump.sql, entering the password at the prompt), copied the dump over to the new server, and imported it into the new database. To import it, I “dropped” the old database, created a new one, and finally loaded the data with the command mysql -u nextcloud -p nextcloud < /tmp/dump.sql. Then I copied the old Nextcloud files as before (using the official documentation’s recommended rsync -Aax switches) and carefully moved them into the new /var/www/nextcloud directory. Even all of this still wasn’t enough.

Looking again at the /var/log/apache2/error.log file, I found that the new Nextcloud install couldn’t read the database because the new server was using the old Nextcloud config.php file (are you still with me?). So I changed a few values in config.php to point at the new database: I changed dbname to the name of the new database and entered the new database login credentials. I also added the new IP address to the “trusted_domains” array in config.php. This seems to have fixed most of my problems.

Just in case, I also ran a script from the official Nextcloud documentation that fixes the permissions on the new server. Then I changed some of the security configurations for the Apache server: I copied my previous SSL configurations into the /etc/apache2/sites-available directory and enabled them with the a2ensite command. With all that finished, I started the Apache server back up and took the Nextcloud installation out of “maintenance mode”. Finally, I was able to use the server. Except it wasn’t exactly to my liking.

You see, I had copied over the “data”, “config”, and “themes” directories, but I had not copied over all of my previous apps. When I saw that I had only half of my previous apps, I thought, “I need to fix this,” and copied the contents of my previous Nextcloud’s “apps” directory to the new server. With those out of the way, I turned to some of Nextcloud’s own recommendations: bumping up PHP’s memory limit and putting in a timeout. These were harder to fix, partly because I’d had the same problems with my previous Nextcloud installation. Nevertheless, I sought to fix them.

One of my problems was with the /var/www/nextcloud/.htaccess file; specifically, it wasn’t being honored. To fix this, I edited the /etc/apache2/apache2.conf file and changed the AllowOverride directive in the /var/www section to “All”. This allowed the /var/www/nextcloud/.htaccess file to work (at least when accessing the site over a secure connection). Next, I added a memory cache to speed up performance: I installed php-apcu and edited /var/www/nextcloud/config/config.php to set 'memcache.local' => '\OC\Memcache\APCu'. My server was made much faster with this tweak. For added polish, I followed the directions in a tutorial and added Redis support.

This was a tough migration that I had thought would be easy. I figured it would take a couple of hours. Instead, it stretched out over a number of hours (not to mention one long night). While it’s great that I have migrated to a more manageable configuration, I should have done more research.

Helping Out a Local Charity

I’ve been helping out a local charity for the past few weeks, preparing tax returns for the needy and underprivileged, and we’ve run into a problem. Each time we have to print a client’s tax return, we have to carry the laptop over to the printer.  This takes a while, and it can be a royal pain.  So I have suggested setting up a small print server so that the laptops on the WLAN can print easily.  My initial set-up looks encouraging: I have set up a Raspberry Pi as a little print server, and have successfully printed from one of the laptops.  With some security measures and other set-up, the other laptops the tax preparers use will be able to use this print server, too.

Learning About Setting Controls for I.T. Assets

In my pursuit of a career in information technology (IT) audit, I must learn about setting controls for securing IT assets and minimizing risk, and eventually about testing that those controls work.  In major organizations, where information flows constantly and is used to advance the organization’s goals, ensuring that information and knowledge are accurate, intact, timely, and secure is important.  To secure them, though, management must know how this information and knowledge can be lost.  Once they understand this, controls must be put in place to prevent that loss.  But management cannot always safeguard these assets on its own.

As a company moves through its financial year, these controls can break down.  For example, backups can be corrupted (losing information), and employees may leave the company (taking knowledge with them).  So it is also good to reassess whether these controls are working as intended.  This is where the IT auditor steps in: to evaluate these controls and see to it that they continue to do the job.

Though I know of some ways of testing these controls (e.g. vouching, interviews, and walkthroughs), I have never carried them out.  All I have done is study them.  While studying textbooks is fine, some would say the true teacher is experience, and experience is what I lack.  For the most part, I have managed a couple of websites (this one included) and tried to keep them from being hacked.  I have put controls in place to ensure that my website is not compromised.  But to make sure they work, I must turn to someone with experience in managing a website.  Better yet, an IT auditor who can teach me exactly what to do so that this website is not damaged.  Eventually, I would learn enough from them that, when I am on an audit engagement, I can help ensure that the company’s valuable assets are kept safe.

MyCroft, AI, and how I’m trying to help it

The other day, I saw that a version of MyCroft was released for the Raspberry Pi.  I have been following MyCroft for a while now (mostly through the Linux Action Show) and have tried using it.  The software is still in beta, so I have run into some bugs; the main one is that I can’t really use it.  I have tried pairing it and then talking into my microphone, but it can’t understand what I’m saying.  At first, I thought it had something to do with getting a secure connection to the MyCroft backend servers.  Now I think it could be a problem with my microphone.

I’ll admit that my desktop microphone isn’t the best.  But how much clarity does the software require?  Apparently, a lot.  The Amazon Echo, for instance, uses an array of microphones to pick up several channels of sound.  So it looks like I’m going to have to get a better microphone.

What I’ve also seen is that MyCroft uses Google’s backend for speech recognition.  It looks like they’ll move to something such as Kaldi, but that doesn’t yet have a large enough speech model to get the job done.  While it has a model based on over a thousand hours of speech, it may require thousands more hours just to get better results.  I’ve been donating to VoxForge and trying to help with their speech corpus.  However, they’ve barely got enough for half of their first release.  So I was wondering how to speed things up and get them more samples.

What they could do is make donating fun and interesting.  I was thinking of something like a badge system on the VoxForge website, or even leaderboards.  Then again, would that actually make it fun to donate?  I need to think more on this.

Interesting video on designing programming languages

Yesterday, I started watching a video on programming languages, and it was over forty minutes before I could stop watching. That’s not because the video is over an hour long, but because of its subject matter.  It’s a presentation by Brian Kernighan titled “How to succeed in language design without really trying”.  The presentation was very well done.  Professor Kernighan went through a bit of history on how some programming languages came about, as well as their uses.  He also talked about his time at Bell Labs, and how he, along with two other great programmers, wrote the language awk.  The video had me interested because, for one, I could understand half of what Professor Kernighan said, and two, he admitted that he threw the language together out of necessity.  He would also, at times, remind the audience of his shortcomings, such as with functional programming languages, or remembering how to program in C.

Made My Own NES Classic Console

It looks like the NES Classic is sold out everywhere, and there are scalpers on eBay trying to bilk old fans out of their hard-earned coins.  Now, I don’t want an NES Classic, on account of already owning a couple of the featured games, as well as owning them on the Virtual Console.  But since a lack of want doesn’t stop me from tinkering, I made my own.

It’s quite easy to make a tiny device which can emulate and play NES games; it’s been a reality for a long time.  In my case, I took a Raspberry Pi 3, the official Raspberry Pi touchscreen, a case to contain these parts, an old SD card with RetroPie installed on it, and a Classic USB NES controller.  And just for shits and giggles, I hooked the whole thing up to a 20Ah battery so that I can play it on the go.

Frankenstein’s fun machine

This was a fun little project, but it does have its setbacks.  The Raspberry Pi, along with the other attachments, draws a good amount of current, so undervoltage is a real concern (that’s the reason the rig has two USB power connectors).  Also, if you want good sound, you’ll have to use a different sound output; the on-board audio jack is terrible.  Then there’s the price: this little beauty set me back around $200.  The NES Classic, by contrast, will set you back $60, is an official machine, and has a few bells and whistles.  Still, my device is easily configurable, and I can add as many games as I want.  So it plays not just NES games, but SNES games as well.

Dealing With the Internet of Things

The other day, I attended a meeting of the North Texas chapter of ISACA, where information technology veteran Austin Hutton gave a presentation on the dangers of the Internet of Things (IoT).  I have written before about the IoT and how it can be used to devastating effect.  One of the problems Hutton talked about is that there are more IoT devices than there are people on Earth.  Thousands are manufactured and sold each day, and each one of these devices can be hacked to assist in an attack.  And the problem is getting bigger.

Most of these devices are poorly designed, and thus have no way of being updated.  The companies that make them run on thin profit margins, so they cannot afford to make them secure.  In some cases, the manufacturer buys its chips from other companies, so it is not directly responsible for their security.  The average IoT device can be hacked easily: many of them have easy-to-crack passwords, or flaws that were not caught during design.  There are even programs which can automatically hack some of these devices; all the attacker needs to do is learn the make and model of the IoT device, select the program, sit back, and gain control.  And even devices that are used exactly as intended may be doing something illegal.

Hutton gave the example of a Tempur-Pedic bed which sends the user’s data back to Tempur-Pedic for analysis, so as to improve the user’s experience.  He then gave an example of someone else (specifically, his 14-year-old granddaughter) sleeping in the bed, with her data being sent to Tempur-Pedic without permission.  This could be considered breaking the law, because she’s a minor.  How would that situation be resolved?  And how can we at least minimize the damage from IoT devices?

For one, education.  Though companies are mostly selling the convenience of IoT devices, consumers must learn how harmful these devices can be.  The public needs to learn that these devices can be used to cause harm to our cities, and possibly to their owners.  Recently, a utility company in Finland had its business disrupted by a DDoS attack, which resulted in the heating for its customers being disabled.  What if, instead, the smart thermostats of many of those customers had been hacked?  The attacker could lower the temperatures in those houses, or disable the thermostats outright, a dangerous situation for homes in Finland during the winter.  How else could these devices be attacked?  An attendee at the meeting, David Hayes of Verizon, offered another scenario.

There are utility companies in North America and Europe that use SCADA (supervisory control and data acquisition) systems to remotely control machines vital to a functioning city (one example is the water pumps that keep drinking water flowing through the city).  What if, Hayes suggested, a hacker took control of these pumps and threatened to take them offline, or even to run them to the point of destruction, unless he was paid $100,000?  Now we’re starting to see the cost of this problem.  And that cost will only increase as malicious hackers devise new ways of misusing IoT devices.

Another way to minimize the damage from IoT devices is to ensure that your own devices can be modified so that only you control them.  If you can change the password, do it.  Check that a default root password hasn’t been hardcoded into the device.  If you can, buy a device that can be updated (though few IoT devices have that capacity).  On the government side, we’re going to need some form of oversight: for instance, no IoT device bought by the government should lack the ability to be updated.  What about the IoT devices already out there?  There is little we can do about them.  If we’re dependent on them, they will be difficult to replace.  Maybe the average person can easily swap out their IoT lightbulbs.  But how does a maintenance manager at a company tell his bosses that, because of the threat these devices pose to the security of the company, they all have to be replaced?  How much will that cost?

This is a problem that will only grow as hacked IoT devices are used to facilitate more attacks.  It is imperative that we address it now, rather than wait for a catastrophe that puts thousands of lives at risk.