Auditing a charity’s network, and finding something out of place

For the past six months or so, I have been helping a local charity with its I.T. needs. This includes updating its computers, designing and setting up a kiosk for its volunteers, and helping other charity members with their own IT needs. Now I'm trying to map the charity's network and help its director of I.T. ensure that all devices (desktop computers, printers, external hard drives, etc.) are accounted for.

About two or three weeks ago, I used nmap to do a quick scan of the local network, checking each device for what it was broadcasting, which ports were open, and just what exactly that machine was. After finishing, I met with the director of I.T. to discuss my findings. He verified most of what I found (we had a problem with one of those Western Digital MyCloud hard drives, but that was soon cleared up). But one part baffled us both.
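For anyone curious about what that kind of scan looks like, here's a minimal sketch. The subnet and host addresses below are made up, and the exact nmap switches I used may have differed; only scan networks you have permission to audit.

```shell
# Hypothetical subnet and host addresses; substitute your own.
SUBNET="192.168.10.0/24"
HOST="192.168.10.57"

# Ping scan: list the live hosts on the subnet.
DISCOVER="nmap -sn $SUBNET"

# Deeper look at one host: open ports, service versions,
# and a reverse-DNS lookup (-R forces hostname resolution).
PROBE="nmap -sV -R $HOST"

# Print the commands; on a real audit you would run them directly.
echo "$DISCOVER"
echo "$PROBE"
```

The ping scan gives you the inventory to compare against the asset list; the per-host scan is what turns up oddities like an unexpected hostname.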

Just like many other small organizations, they use old phones. The ones they use are Avaya IP phones (i.e. voice over internet protocol [VoIP] phones), model 1616. I don't know when they got these phones, but they're old. When I scanned these phones, I only got their IP addresses; there were no hostnames. However, on one particular phone, I found a hostname, and it's not one you would find on any of the other computers on the network (those had hostnames ending in ".local"). This one had the hostname "6lfb7c1.salmonbeach.com". How it got this hostname, I am not entirely certain.

At first, I thought these were phones with just a few features (voice mail, call forwarding, conference calls, stuff like that). As I have since found out, these phones are fully featured and have upgradeable firmware. The phone in question, the model 1616-1 BLK, gets its firmware from the local Avaya PBX server. Since it gets its firmware from the server, how could the hostname have been changed? In the phone's settings, the hostname can't even be changed. One of the members of the charity's administration said that they had problems months ago with the voice mail system, but I doubt that's related to this problem.

So how should I approach this? Has it been hacked? Is it just a software glitch? Hopefully it’s nothing serious. The I.T. director said that he bought a bunch of these old phones on the cheap years ago, and he’ll look into flashing the firmware on the phone. So let’s hope that’s the last we’ll hear of it.

Newer Ways of Audit Reporting on Third Party Companies

I went to a recent meeting of the North Texas chapter of ISACA, and there was a presentation on SSAE 18.  For those of you who don’t know, SSAE 18 supersedes SSAE 16, and consolidates Service Organization Controls reporting into something more manageable.  Here, I’ll talk about what I’ve learned about SSAE 18, SOC 1, and SOC 2.

In SSAE 18, more emphasis has been put on testing the design of controls for subservice organizations (e.g. third parties to which the organization has contracted out some process) and on whether they are doing what they are supposed to be doing.  The auditor, through Service Organization Control (SOC) reporting, has to report on the effective use of these controls.  In the case of a SOC 1 report, they would assist in testing the controls as they pertain to the financial statements.  With SOC 2, the auditor reports on the controls with regard to security, availability, integrity, and confidentiality.

Now the auditor has to look for things such as complementary user entity controls, which are the controls that are assumed to be in place for users.  The auditor will have to look at reports from the subservice company to the main organization, and see whether the organization is actually verifying the information in those reports.  For instance, the auditor will have to see that system-generated reports are being validated by the users of those reports.  The processing integrity principle will be used heavily in this situation.

This audit will look at how management has chosen the suitable criteria for the control, and how well they’ve measured the subject matter (note that “subject matter” means the risk relevant to entities using the subservice company).  So an auditor will look at whether the risk is something related to the entity’s business with the subservice company, then check the metrics of the current control, and see whether they are actually related to the control.

Migrating My Nextcloud Server to Another Server

When I saw that my version of Ubuntu could not be upgraded any more (due to Digital Ocean's system), I had to migrate my Nextcloud installation (version 11.0.2) to a newer version of Ubuntu (16.04). The Nextcloud documentation details exactly how to migrate your installation to another Linux server, so I tried using that. Sadly, though, it leaves a couple of things out. So here's my version of how I migrated my Nextcloud installation to another server.

First of all, the docs say to back up your data, which is what I did (to some extent). Next, I spun up a droplet with similar specifications to my previous installation (512MB of RAM and 20GB of storage, for instance). I made sure to put my usual ssh key on this new droplet, as well as securing it in other ways. At first, I thought I knew how to do this easily: copy the /var/www/nextcloud directory over to the new server (which was also set up for Nextcloud), change some directory names, and be done. However, it turned out not to be that simple.

When I tried this method, I couldn't access the web interface. What was worse, I had forgotten to change the firewall configuration on the new droplet and accidentally "locked" myself out. So, after some more research, I tried again with another droplet.

This time, I copied the files using rsync, and made sure that I used the proper switches (I used the "-a" and "-t" switches to archive the files and preserve their timestamps; strictly speaking, "-a" already implies "-t"). I saved the files to a new directory on the server, and made sure to back up the old files. I thought that this time I had fixed the issue. But Fate can be cruel.

Even though I had copied over the files in the "correct" fashion, the server still wasn't accessible from the web. Looking at the /var/log/apache2/error.log file, I found that the web server couldn't start due to Nextcloud not being able to read the database. After researching the problem more, I learned that the data can't just be "copied" over; rather, the files have to be copied and the database has to be exported and imported. So, after scrapping that droplet (I had changed it too much already), I spun up a new one and tried the whole thing all over again.

First, I put the server into "maintenance mode" and stopped the Apache server. Then I exported the data from the old database (via the command mysqldump -u ownCloud -p ownCloud > /tmp/dump.sql; note that the export uses mysqldump, not mysql, and that -p will prompt for the password), copied the dump over to the new server, and imported it into the new database. For the import, I "dropped" the old database, created a new one, and finally loaded the data with the command mysql -u nextcloud -p nextcloud < /tmp/dump.sql. Then I copied over the old Nextcloud files as I did before (using the official documentation's recommended rsync -Aax switches) and carefully moved them into the new /var/www/nextcloud directory. Even after all of this, it still wasn't enough.
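Put together, a sketch of the working sequence looks roughly like this. The database name, user, and target host are placeholders (yours will differ), and the commands are printed rather than executed here; note again that the export uses mysqldump, not mysql.

```shell
# Placeholder credentials and host; substitute your own.
DB_USER="nextcloud"
DB_NAME="nextcloud"
NEW_HOST="user@new-server"

# 1. Export the old database (mysqldump, not mysql, does the export;
#    -p will prompt for the password):
DUMP="mysqldump -u $DB_USER -p $DB_NAME > /tmp/dump.sql"

# 2. Copy the Nextcloud directory, preserving ACLs and permissions
#    (-Aax is what the official docs recommend):
COPY="rsync -Aax /var/www/nextcloud/ $NEW_HOST:/var/www/nextcloud/"

# 3. On the new server, after dropping and recreating the database,
#    load the dump:
IMPORT="mysql -u $DB_USER -p $DB_NAME < /tmp/dump.sql"

printf '%s\n' "$DUMP" "$COPY" "$IMPORT"
```

Remember to put the old server into maintenance mode and stop Apache before step 1, so nothing changes mid-copy.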

Looking again at the /var/log/apache2/error.log file, I found that the new Nextcloud install couldn't read the database because the new server was still using the old Nextcloud config.php file (are you still with me?). So, I changed a few values in the config.php file to point it at the new database: I changed dbname to the name of the new database and put in the new database login information. I also added the new IP address to the "trusted_domains" array in the config.php file. This fixed most of my problems.
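For illustration, here's roughly what the changed values in config.php look like. The domain, IP address, and password below are placeholders, and this demo writes to a scratch file; on a real server you would edit /var/www/nextcloud/config/config.php directly.

```shell
# Demo only: write the relevant config.php fragment to a scratch file.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
  'dbname' => 'nextcloud',         // name of the new database
  'dbuser' => 'nextcloud',         // new database login
  'dbpassword' => 'changeme',      // placeholder; use your real password
  'trusted_domains' =>
    array (
      0 => 'cloud.example.com',    // placeholder domain
      1 => '203.0.113.10',         // new server's IP (placeholder)
    ),
EOF

# Sanity check: the trusted_domains entry is present.
grep -c "trusted_domains" "$CONF"   # prints 1
```

Any host or IP you use to reach the web interface has to appear in trusted_domains, or Nextcloud will refuse the request.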

Just in case, I also ran a script that I found in the official Nextcloud documentation for fixing the permissions on the new server. Then I changed some of the security configurations for the Apache server: I copied my previous SSL configurations into the /etc/apache2/sites-available directory and enabled them with the a2ensite command. With all that finished, I started up the Apache server again and took the Nextcloud installation out of "maintenance mode". Finally, I was able to use the server. Except it wasn't exactly to my liking.
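The permission fix essentially tightens the modes on the whole Nextcloud tree. Here's a sketch of the chmod portion, run on a scratch directory so it's safe to try anywhere; on a real server you would target /var/www/nextcloud and also chown everything to www-data. This is my summary of the idea, not the official script itself.

```shell
# Demo on a scratch directory; on the real server, use /var/www/nextcloud
# and also run: chown -R www-data:www-data /var/www/nextcloud
NC_DIR="$(mktemp -d)"
mkdir -p "$NC_DIR/data" "$NC_DIR/config"
touch "$NC_DIR/config/config.php"

# Directories: owner rwx, group rx; files: owner rw, group r; others nothing.
find "$NC_DIR" -type d -exec chmod 750 {} \;
find "$NC_DIR" -type f -exec chmod 640 {} \;

stat -c '%a' "$NC_DIR/config/config.php"   # prints 640
```

Keeping "others" at zero permissions matters most for config.php, since it holds the database password.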

You see, I had copied over the "data", "config", and "themes" directories, but not all of my previous apps. When I saw that I had only half of my previous apps, I thought, "I need to fix this," and copied the contents of my previous Nextcloud's "apps" directory to the new server. With that out of the way, I turned to some of Nextcloud's own recommendations, bumping up the memory limit (as well as putting in a timeout). These were harder to fix, partly because I had had the same problems with my previous Nextcloud installation. Nevertheless, I set out to fix them.

One of my problems was that the /var/www/nextcloud/.htaccess file wasn't working. To fix this, I edited the /etc/apache2/apache2.conf file and changed the AllowOverride directive for the /var/www section to "All". This allowed the /var/www/nextcloud/.htaccess file to work (at least, it would work since I'm accessing the site over a secure connection). Next, I added a memory cache to speed up performance: I installed php-apcu and edited the /var/www/nextcloud/config/config.php file to reflect this by assigning memcache.local the value '\OC\Memcache\APCu'. This tweak made my server much faster. For added polish, I followed the directions in this tutorial and added Redis support.
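For those who prefer the command line, the same memcache change can be made with Nextcloud's occ tool instead of editing config.php by hand. A sketch, with the commands printed here rather than executed (the occ path is the usual default; adjust to your install):

```shell
# Usual default path to occ; adjust for your installation.
OCC="sudo -u www-data php /var/www/nextcloud/occ"

# Install the APCu PHP extension, then point Nextcloud's local
# memcache at it (note the single backslashes in the class name):
INSTALL="sudo apt-get install -y php-apcu"
SET_CACHE="$OCC config:system:set memcache.local --value '\\OC\\Memcache\\APCu'"

printf '%s\n' "$INSTALL" "$SET_CACHE"
```

Running occ as the web user (www-data here) avoids creating root-owned files in the Nextcloud directory.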

This was a tough migration that I thought would be easy. I figured it would take a couple of hours. Instead, it stretched out over a number of hours (not to mention one night). While it's great that I have migrated to a more manageable configuration, I should have done more research.

Helping Out a Local Charity

I’ve been helping out a local charity with preparing tax returns for the needy and underprivileged for the past few weeks, and we’ve run into a problem. Each time we have to print out a tax return for a client, we have to take the laptop over to the printer.  This takes a while, and it can be a royal pain.  So I have suggested setting up a small print server so that the laptops on the WLAN can print easily.  My initial set-up looks encouraging: I have set up a Raspberry Pi as a little print server, and have successfully printed from one of the laptops.  With some security measures and other set-up, the other laptops that the tax preparers are using will be able to use this print server, too.
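For the curious, a Raspberry Pi print server of this sort generally boils down to CUPS. Here's a sketch of the commands involved, printed rather than executed (the "pi" user name is an assumption; use whichever account should administer printers):

```shell
# Install the CUPS print server:
INSTALL="sudo apt-get install -y cups"

# Let the 'pi' user administer printers (placeholder user name):
ADMIN="sudo usermod -aG lpadmin pi"

# Share printers on the LAN and allow remote administration:
SHARE="sudo cupsctl --remote-admin --remote-any --share-printers"

printf '%s\n' "$INSTALL" "$ADMIN" "$SHARE"
```

After that, the printer is added through the CUPS web interface on port 631, and the laptops can discover it as a shared network printer.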

Learning About Setting Controls for I.T. Assets

In my pursuit to get into the information technology (IT) audit field, I must learn about setting controls for securing IT assets, minimizing risk, and eventually testing that said controls work.  In major organizations, where information flows constantly and is used to advance the organization’s goals, ensuring that information and knowledge are accurate, intact, timely, and secure is important.  To secure them, though, management must know how this information and knowledge can be lost.  Once they understand this, controls must be put into place to prevent this loss.  But management cannot always safeguard these assets.

As a company moves along in its financial year, these controls can break down.  For example, backups can be corrupted (losing information), and employees may leave the company (thus losing knowledge).  So it is also good to reassess whether these controls are working as intended.  This is where the IT auditor steps in: to evaluate these controls and see to it that they continue to do the job.

Though I know of some ways of testing these controls (e.g. vouching, interviews, and walkthroughs), I have never carried them out.  All I have done is study them.  While studying textbooks is fine, some would say the true teacher is experience, and experience is what I lack.  For the most part, I have managed a couple of websites (this one included) to keep them from being hacked.  I have put controls in place to ensure that my website is not compromised.  But to make sure they work, I must turn to someone who has experience in managing a website; not just that, but an IT auditor who can teach me exactly what to do so that this website is not damaged.  Eventually, I would learn more from them so that, when I am on an audit engagement, I can ensure that the company’s valuable assets are kept safe.

MyCroft, AI, and how I’m trying to help it

The other day, I saw that a version of MyCroft was released for the Raspberry Pi.  I have been following MyCroft for a while now (mostly through the Linux Action Show) and have tried using it.  The software is still in beta, so I found some bugs with it; namely, I can’t really use it.  I have tried pairing it, then talking into it with my microphone, but it can’t understand what I’m saying.  At first, I thought it had something to do with getting a secure connection to the MyCroft backend servers.  Now I think it could be a problem with my microphone.

I’ll admit that my desktop microphone isn’t the best.  But how much clarity does the microphone require?  Apparently, a lot.  The Amazon Echo, for instance, uses an array of microphones that can pick up several channels of sound.  So it looks like I’m going to have to get a better microphone.

What I’ve also seen is that MyCroft uses Google’s backend for the speech recognition.  It looks like they’ll move to something such as Kaldi, but that doesn’t have a large enough speech model to get the job done.  While it has a model based upon over a thousand hours of speech, it may require thousands more hours of speech just to get better results.  I’ve been donating to Voxforge and trying to help with their speech corpus.  However, they’ve barely got enough for half of their first release.  So I’ve been wondering how to speed things up and get them more samples.

What they could do is make it fun and interesting to donate.  I was thinking of something like a badge system on the Voxforge website, or even leaderboards.  Then again, would this make it fun to donate?  I need to think more on this.

Interesting video on designing programming languages

Yesterday, I started watching this video on programming languages, and I couldn’t stop watching for over forty minutes. It’s not that the video is over an hour long, but rather its subject matter.  It’s a presentation by Brian Kernighan titled “How to succeed in language design without really trying”, and it was very well done.  He went through a bit of the history of how some programming languages came about, as well as their uses.  He also talked about his time at Bell Labs, and how he, along with two other great programmers, wrote the language awk.  The video had me interested because, for one, I could understand half of what Professor Kernighan said, and two, he admitted that he threw the language together out of necessity.  Also, he would, at times, remind the audience of his shortcomings, such as with functional programming languages, and remembering how to program in C.

Dealing With the Internet of Things

The other day, I attended a meeting of the North Texas chapter of ISACA.  There, information technology veteran Austin Hutton gave a presentation on the dangers of the Internet of Things (IoT).  I have written before about the IoT and how it can be used to devastating effect.  One of the problems that Hutton talked about is that there are more IoT devices than there are people on earth.  Thousands are being manufactured and sold each day, and each one of these devices can be hacked to assist in an attack.  And the problem is getting bigger.

Most of these devices were poorly designed, and thus have no way of being updated.  The companies who make them have thin profit margins, so they cannot afford to make them secure.  In some cases, the manufacturers buy the chips from other companies, so they are not directly responsible for their security.  The average IoT device can be easily hacked: a number of them have easy-to-crack passwords, or have flaws that were not caught when they were being designed.  There are even programs which can auto-hack some of these devices; all the hacker needs to do is learn the make and model of the IoT device, select the program, sit back, and gain control over it.  And even devices that are used exactly as intended may be doing something illegal.

Hutton gave the example of a Tempur-Pedic bed which can send the user’s data back to Tempur-Pedic for analysis so as to improve the user’s experience.  He then gave an example of someone else (specifically, his 14-year-old granddaughter) sleeping in the bed, and their data being sent to Tempur-Pedic without their permission.  This can be considered breaking the law because she’s a minor.  How would that situation be resolved?  How can we at least minimize the damage from IoT devices?

For one, education.  Though companies are really selling the convenience of IoT devices, consumers must learn how harmful they can be.  The public needs to learn that these devices can be used to cause harm to our cities, and possibly to themselves.  Recently, the business of a utilities company in Finland was disrupted by a DDoS attack, resulting in the heating for their customers being disabled.  What if it had been the smart thermostats of many of their customers getting hacked?  The attacker could lower the temperatures in these houses, or disable the thermostats entirely, which would be a dangerous situation for homes in Finland during the winter.  How else could these devices be attacked?  An attendee at the meeting, David Hayes of Verizon, had one other scenario.

There are utility companies in North America and Europe that use supervisory control and data acquisition (SCADA) systems, which can remotely control machines vital to a functioning city (one example being the water pumps that keep drinking water flowing through the city).  What if, Hayes suggested, a hacker took control of these pumps and threatened to take them offline, or even to overwork them to the point of destruction, unless he was paid $100,000?  Now we’re starting to see the cost of this problem.  And the cost will only increase, as malicious hackers devise new ways of misusing these IoT devices.

Another way we can minimize the damage from IoT devices is to ensure that your devices can be modified such that only you control them.  If you can change the password, do it.  Check that a default root password hasn’t been hardcoded into the device.  If you can, find a device that can be updated (though few IoT devices have that capacity).  On the government side, we’re going to need some form of oversight; for instance, no IoT device bought by the government should lack the ability to be updated.  What about current IoT devices?  There is little we can do about them.  If we’re dependent on them, they’re going to be difficult to replace.  Maybe for the average person it’s easy to change their IoT light bulbs.  But how can a maintenance manager at a company tell his bosses that, due to the threats these IoT devices pose to the security of the company, they all have to be replaced?  How much will that cost?

This is a growing problem, and it will only get worse as hacked IoT devices are used to facilitate these attacks.  It is imperative that this problem be addressed now, rather than waiting for some catastrophe to occur and involve the lives of thousands.

Wondering About Risk Assessments

Since I have little knowledge of audits (only from what I learned in college), I have been reading up on the finer details of an audit.  I came across this documentation on the methods of carrying out a risk assessment in an audit.  The article lists three options for performing a risk assessment (though there are many more).  One way is to have an outside consultant come in, look at what the company wants to accomplish, analyze the business processes, and determine what the exact risks are.  Another is for a consultant to come in, work with management to identify risks, determine the level of risk to the company, and evaluate the controls in place.  The final way, as detailed in the article, is to have the assessment performed by many employees across the company, who identify the possible risks, ensure the controls are in place, and monitor whether the controls are working.  All of this, though, is meant for an audit of the financial statements.  While it’s true these methods can be used to audit other parts of the company, they are mainly for ensuring that the financial statements are reasonably correct and free from errors.

So how could I apply this to an I.S. audit?  The first step on an engagement is setting the scope and the objectives of the audit.  Then you move on to the risk assessment.  Which of these methods to use will depend upon how the company operates, and on the inherent risk to the company.  If the company is more “top-down”, and things are usually dictated from the top, then perhaps it would be better to have a consultant come in, talk with management to identify risks, and perform more assessments from there.  A problem with this, though, is that you may not get buy-in from the lower-level employees.  At least, that’s what I can tell from such an approach.

As for other methods, well, I’m going to have to eventually learn those in detail.

The Problems With the Internet of Things

As more and more Internet of Things (IoT) devices are bought and set up, there is growing concern about what they can do, in addition to their normal purpose.  The security researcher Brian Krebs had his website brought down by a Distributed Denial of Service (DDoS) attack.  Akamai, the company that formerly hosted Krebs’ site and provided its protection, said that the attack was carried out by hundreds of hacked IoT devices (he has since moved to Google’s protective services).  It didn’t use reflection or amplification techniques, either; it used traditional denial-of-service methods, flooding his site with requests.  Akamai said it was the largest DDoS attack they had ever seen.  This brings me to the question: how can we prevent and/or mitigate these sorts of attacks?

This attack was brought on mainly by unsecured, unmaintained IoT devices.  These devices are often manufactured, released, and then never updated.  The average consumer of these IoT devices knows that their features make them easy to control from afar, oftentimes with a mobile phone.  What they do not realize is that hackers can also break into these devices and use them, too.  Often, the manufacturer will throw in a free OS (such as GNU/Linux), add a thin proprietary layer on top, and sell it.  They do not realize the problem they are creating, as exemplified by the attack on Krebs’ website.

It is true that there is a cost to updating and maintaining these devices.  Which company wants to have a costly developer staff just to update the software on their line of light bulbs?  Then again, which company wants to be known for the product which aided in bringing down Google’s servers?  Either way, there’s going to have to be a way for these devices to get updated.

Usually what a user will find on these IoT devices is an embedded OS like GNU/Linux.  So why not develop a distribution that utilizes open standards and receives regular updates?  Something similar to Android, yet with stricter guidelines.  A company could, for instance, set up a distribution with safety, compatibility, and interoperability in mind.  It could work with the IoT device manufacturers in making products that work together, and can be updated regularly.  But let’s not talk only about the manufacturers; the consumer has a responsibility, too.  (It’s worth noting that there is already an embedded GNU/Linux distribution that can be easily built and configured for IoT devices.)

The average consumer of IoT devices will have to learn about the extended benefits of these devices, and realize that they come with a much greater risk.  Indeed, one cannot put a simple toaster in the same category as a light bulb which one can control with a mobile phone.  Consumers must be made aware that an attacker can take control of their IoT devices and use them for malicious purposes.  This doesn’t mean that they need to be scared into acting, though, because actions made in fear are, oftentimes, poor choices.  They should be informed that it’s possible for this to occur, and that there are forces in place trying to counter these attacks.

Going forward, companies that make IoT devices, and consumers of IoT devices, must be more safety conscious, for there are malicious forces in the world who are ready and able to make use of these devices for their own nefarious purposes.