

Let’s take inventory of the information we now have and decide where we will go from here.

Information Inventory

Figure 1 – Information Inventory

Using Modules

The three commands we used (show domains, show contacts, and show companies) will help us to decide which modules to use. The show modules command will display a list of modules to choose from.

show modules

Figure 2 – show modules

As a quick note for reading the module names, the "-" delimiter divides each module into "what you have" and "what you want". So the command reads like this: use (I have) recon/domains (I want) hosts/shodan_hostname.

use recon/domains-hosts/shodan_hostname

Figure 3 – recon-ng to shodan module

The red text indicates that an error occurred when running the module. The green text indicates the new elements added to the database.


Figure 4 – shodan summary

The module added hosts, so the show hosts command will display the additions. Notice that we now have ports as well.

show hosts

Figure 5 – show hosts results

Notice this command displays the row ID, the host, the IP address, and the module that was used.

show ports

Figure 6 – show ports results

Remove Unwanted Entries

If we want to stay within the .com domain, we need a way to remove the .hk and other domains.

help delete

Figure 7 – help delete results

Remember, show ports was the last command we ran, so ports was the table we viewed. The delete command removes the selected rows ONLY from the ports table. To validate that the command worked, we will check the table again.
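For reference, delete takes the table name followed by one or more row IDs. A quick sketch (the row IDs below are hypothetical; check your own show ports output for the real ones):

```
delete ports 5
delete ports 7-9
```

Recent versions of recon-ng accept ranges and comma-separated IDs; if yours does not, delete the rows one at a time.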

show ports

Figure 8 – Cleaned ports table

The .hk domains are still present in the hosts table.  You will need to remove them from each table.

show hosts

Figure 9 – show hosts results

Exporting Data and Report Generation

Now that we’ve imported data from an outside source, run several modules inside recon-ng, and even deleted data from the database, it’s time to create our report. There are lots of options to choose from; the search reporting command gives us our choices.

search reporting

Figure 10 – search reporting results

The show dashboard command allows us to look at the modules used and the number of times they’ve been run. We can also see the amount of information inside the database.

show dashboard

Figure 11 – show dashboard results

Some of the modules I ran were not in this tutorial; from Figure 11 you can see all the modules used. Figure 12 is a continuation of the show dashboard command. Here you can see the information that is captured in the database. This also makes it easier to create a report or export information.


Figure 12 – show dashboard summary

Exporting Data

We will use the reporting/list module to create a list of IP addresses to use in nmap.  This will tie in several things we’ve already covered.

  • Search for modules
  • Show options
  • Schema command
  • Set command

We will also use Nmap to scan for port 80.

search reporting

Figure 13 – search reporting

use reporting/list
show options

Figure 14 – report/list options

We will run show schema (the results are truncated here) to get the table schema we need.

show schema

Figure 15 – show schema

Next, use the set command to give recon-ng the file location.

set FILENAME /location/on/file/system

Figure 16 – set file location

Finally, run the module and let recon-ng generate the results. The screenshot is truncated so you can get an idea of what it looks like; your mileage may vary.


Figure 17 – Report Results

<<Truncation Occurs>>


Figure 18 – Report Summary

We will use export_iplist.txt as input for our Nmap scan.

  • -iL input list filename
  • -p 80 port to scan
  • -Pn No Ping
nmap -iL export_iplist.txt -Pn -p 80

Figure 19 – Nmap port 80 scan
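If you also want the results on disk for later parsing, Nmap's output flags can be stacked onto the same command (the filename here is our choice):

```
nmap -iL export_iplist.txt -Pn -p 80 -oG nmap_port80.gnmap
```

-oG writes greppable output; -oA basename writes all of Nmap's output formats at once.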

Create Report

This section will show you how to create an HTML report using the same data set.

use reporting/html
show options
set CREATOR Pentester
set CUSTOMER United Airlines

Figure 20 – report/html


Figure 21 – set options for report

We used the set command to add the creator and the customer properties for our report. Use the run command to execute the module.


Figure 22 – generate report

Not too exciting but we have our report waiting for us in the .recon-ng folder.


Figure 23 – Report location

Let’s look at that file using a browser.


Figure 24 – File Browser


Figure 25 – HTML Report Example

The next set of figures will show the expanded results for the Summary, Domains, and Locations sections.


Figure 26 – Summary Section


Figure 27 – Domains Section


Figure 28 – Locations Section

We could have done more with the information in the Contacts section. One thing I like to do with this information is expand on it using the https://pipl.com website. Using Pipl we could really dig into who any of the individuals are to create more effective spear phishing attacks or sales calls. Who are we kidding? We don’t do sales calls.


Figure 29 – Contacts Section

Look through the Vulnerabilities section. We haven’t even started a technical vulnerability assessment and we already have a place to start. OSINT for the win!


Figure 30 – Vulnerabilities Section


Figure 31 – Vulnerabilities Section 2


In this tutorial we covered Recon-ng. It can be found at https://bitbucket.org/LaNMaSteR53/recon-ng. I really enjoy working with this tool. Just playing with it can give you a better understanding of other ways to gather information about your target. It really becomes about breadcrumbs: how deep can you dig into a company, email address, or person?

Areas we covered:

  • Installation
  • Adding API Keys
  • Creating a Workspace
  • Importing information into the database (grep and awk commands)
  • Using Modules
  • Removing unwanted entries
  • Exporting data (to use with Nmap)
  • Creating Reports

For quite some time fierce was my go-to DNS testing tool (we even wrote a post on it), and I still use it extensively, but recently I have been using dmitry in parallel. dmitry is the Deepmagic Information Gathering Tool, and while it doesn’t have the subdomain brute-force functionality that I love in fierce, it automates other functions that I never realized I was tired of doing manually.

Why do we spend so much time on DNS? A company’s DNS server is a gold mine of information during a penetration test. This is especially true for organizations that have improperly configured split-view DNS, where internal records are exposed externally. It is also pretty common to find test, development, or integration servers that have been exposed in DNS and then forgotten about. Why spend all of your penetration testing effort on the fully patched and hardened server when the test server from 2006 is available? DNS helps find the path of least resistance. Once again we will be using the HackerOne directory to demonstrate this tool on real-world systems. For dmitry I chose marktplaats.nl.

root@kali:~# dmitry -h
Deepmagic Information Gathering Tool
"There be some deep magic going on"

dmitry: invalid option -- 'h'
Usage: dmitry [-winsepfb] [-t 0-9] [-o %host.txt] host
  -o     Save output to %host.txt or to file specified by -o file
  -i     Perform a whois lookup on the IP address of a host
  -w     Perform a whois lookup on the domain name of a host
  -n     Retrieve Netcraft.com information on a host
  -s     Perform a search for possible subdomains
  -e     Perform a search for possible email addresses
  -p     Perform a TCP port scan on a host
* -f     Perform a TCP port scan on a host showing output reporting filtered ports
* -b     Read in the banner received from the scanned port
* -t 0-9 Set the TTL in seconds when scanning a TCP port ( Default 2 )
*Requires the -p flagged to be passed

We are going to simply step through the options with a brief description and talk about what information would be useful during different phases of a pen test.

-i and -w perform a whois lookup in slightly different ways. We will combine them to get an IP address and a whois lookup at the same time. For our primer the domain name would probably be the preferred starting point, but if you only had an IP address to start with, the -i option would get the same results. I’ve used dmitry to track down the source of a brute-force attack on an internet-facing system. It wasn’t very amazing; it was a compromised host in a medium-size company.


Figure 1 – Whois Lookup

I’ve redacted the screenshot since whois reports are fairly extensive, but it tells us the IP address we are working on and the subnet it is part of. We also know that it is registered with RIPE, the regional internet registry that covers Europe, which makes sense because they are out of the Netherlands.

Next we will get the netcraft.com information for the domain. Netcraft is an internet security firm out of the UK which does anti-spam and anti-phishing work. We learn two things from the -n option. First, they are reputable enough to not have been reported for spam/phishing and second, that the IP address for the system changed so they probably have a load balancer or are hosting from multiple locations. This also makes sense.


Figure 2 – Netcraft Report

Let’s use the -s option to look for subdomains. This isn’t super interesting, but at least it has a few that we could look at if we were testing the entire domain.


Figure 3 – Subdomain Search

The -e option is useful for starting phishing attacks or feeding information into recon-ng, which we also have a tutorial on (hint, hint). This specific test was super anticlimactic, but you get the idea.


Figure 4 – Email Search

dmitry also has a built-in port scanner. I like the banner enumeration function, so I normally stack the -b option onto the -p port scan option.


Figure 5 – Port Scan

Wow, that is a lot of steps. Why wouldn’t you just stack all of those into one command and write it out to a file instead? Because then how would this whole primer be longer than one page? The last screenshot is how I actually use it, and I read out of the file instead of off the screen.


Figure 6 – Combined Testing
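The combined run shown in Figure 6 looks something like this (the output filename is our choice; -o writes wherever you point it):

```
dmitry -winsepb -o marktplaats.txt marktplaats.nl
```

Remember from the help text that -b requires -p, which is why the two travel together.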

marktplaats.nl was a boring choice for this tool but with luck whatever domain you point it at will be fruitful.

Creating a Workspace

The workspace is an area that will help keep your reconnaissance organized. Each workspace has its own directory inside the hidden .recon-ng directory in your home directory.

First we will find an organization to recon and build our workspace around this company.  We will use HackerOne to get our company.

This is how Wikipedia describes HackerOne:

“HackerOne is a vulnerability coordination and bug bounty platform that connects businesses with cybersecurity researchers (aka, hackers). It is one of the first companies to embrace and utilize crowd-sourced security and hackers as linchpins of its business model, and is the largest cybersecurity firm of its kind.[1]”

Even though we are only performing reconnaissance in a non-intrusive manner, we will use a company from HackerOne’s Directory.  Under the right conditions, this company has agreed to recon and scanning.  We will only be using recon-ng. Figure 1 shows the company we will use in the tutorial but feel free to select a different company from HackerOne or use any one that you are authorized to test against.


Figure 1: HackerOne Company

Figures 2 and 3 show the scope that is authorized for testing including eligible submissions and domains.

Eligible Items

Figure 2: Eligible Items

Allowed Domains

Figure 3: Allowed Domains

workspaces -h shows us the different options we have available to use (Figure 4).


Figure 4: Workspace -h

Next we will add our workspace using the following command (Figure 5).

workspaces add

Figure 5: Adding a Workspace

After this command you are automatically placed into your new workspace. workspaces list will show you the status of your workspaces.


Figure 6: List of workspaces

Next, we will add our company and our domain. This will add information to the SQLite database. To add information into the database, we need to understand the schema, the layout of the tables. To look at the schema of the database, run the following command (Figure 7).

show schema

Figure 7: Show Schema

There are thirteen different tables; we will view the schema of the tables we use in this tutorial.

add companies

Running the add companies command will prompt for the other columns. Press enter if you want to leave a column blank.
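The interaction looks roughly like this (the values are examples, and the exact prompts depend on your recon-ng version):

```
add companies
company (TEXT): United Airlines
description (TEXT):
```

Pressing enter at the description prompt leaves that column blank.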


Figure 8: Add Company

Add the domain using the following command (Figure 9).

add domains

Figure 9: Add Domains

To verify that the domain was added successfully run the command shown in Figure 10.

show domains

Figure 10: List Domains

A simple way of thinking about adding to the tables is shown in the next Figure 11.


Figure 11: Table Visualization

Now that we’ve added data to the database and know how to verify that data was inserted correctly, let’s move on to importing and exporting data.

Importing Data into the Database

We will use theHarvester to gather information about united.com and import this into recon-ng’s database.

From Edge Security  http://www.edge-security.com/theharvester.php

“The objective of this program is to gather emails, subdomains, hosts, employee names, open ports and banners from different public sources like search engines, PGP key servers and SHODAN computer database.

This tool is intended to help Penetration Testers in the early stages of the penetration test in order to understand the customer footprint on the Internet. It is also useful for anyone that wants to know what an attacker can see about their organization.”

If theHarvester isn’t already installed, i.e. you aren’t using Kali Linux, you can clone it from here: https://github.com/laramies/theHarvester


Figure 12: theHarvester

We called theHarvester to gather data on the domain united.com using all the data sources listed in the help screen. We directed the output to our recon-ng folder using the ‘>’ operator. The sample command we used follows:

./theHarvester.py -d united.com -b all > ~/recon-ng/harvester.txt

The file name is harvester.txt. This is an ugly file that we will parse through using a few Linux utilities. Sample results are shown in the next figure.


Figure 13: Sample Results

The next step is to take this output and clean it up a bit with some Linux utilities. We will use grep and awk to trim the tree.

Grep and AWK

grep is a command-line utility for searching plain-text data sets for lines matching a regular expression.
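Here is a minimal sketch of how grep and awk team up for this kind of cleanup. The sample file below is invented for illustration; real theHarvester output will differ:

```shell
# Create a tiny stand-in for theHarvester output (hypothetical contents)
printf 'jsmith@united.com\nwww.united.com:203.0.113.10\n' > harvester_sample.txt

# grep pulls out the email lines
grep '@united.com' harvester_sample.txt
# -> jsmith@united.com

# awk splits host:IP pairs on the colon and keeps just the IP
grep ':' harvester_sample.txt | awk -F: '{print $2}'
# -> 203.0.113.10
```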

This is by no means the perfect way; it is just one of many ways to get the results you need. Using grep, we will create a list of email addresses from the harvester.txt file (Figure 14).

grep @united.com harvester.txt > united_emails.txt

Figure 14: grep command

If you are interested in the file contents, use the cat command to view them in the terminal.

cat united_emails.txt

Figure 15: cat results

Next, we will create a list of hosts for import from theHarvester results. (Figure 16)

grep ":" harvester.txt

Figure 16: grep host

Grep will also help create the virtual host list.  Also take note that since “united.com” is the only domain in scope, it becomes part of the command.

grep ":" harvester.txt | grep united.com

The pattern that we wanted to match was “=”, and rather than count the exact number of lines after the pattern, I chose 200 as a generous line count (-A200), as shown in Figure 17.


Figure 17: grep for Virtual Hosts

This command was a little harder to figure out.

grep -A200 "=" harvester.txt | grep united.com > virtual_hosts.txt
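To see what -A200 is actually doing, here is a toy reproduction with an invented file; everything before the “=” marker is dropped, and the 200 lines after it are searched for the in-scope domain:

```shell
# Invented sample: an email section, a '=' divider, then host lines
printf 'Emails found:\njsmith@united.com\n=\nhost1.united.com\nunrelated.example\n' > sample.txt

# Keep only the divider and what follows, then filter for the scoped domain
grep -A200 '=' sample.txt | grep united.com
# -> host1.united.com
```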

It is time to import our information into recon-ng.

Using the show modules command, we get a list of modules broken down by category. We will use the import/list module from the Import category.

show modules

Figure 18: Import Modules

The “show info” command shows the options to use and the table and columns that will be needed for the import.

show info

Figure 19: Show Info

To find the column and table, we will use the “show schema” command. This will give us a list of the tables and the different columns in each.

show schema

Figure 20: Show Schema

To import email addresses, we will use the “contacts” table and the email column. Our file name will be the united_emails.txt file we created using theHarvester. The “set” statement sets the variables for the import. The “run” command executes the module.

set TABLE contacts
set COLUMN email
set FILENAME united_emails.txt

Figure 21: Email Import

The “show contacts” command shows the data inside the contacts table. This is a second verification that the data imported correctly.


Figure 22: Show Contacts

Part 3: Usage and Reporting


Recon-ng is an open-source reconnaissance framework written in Python. This SQLite-database-driven tool incorporates Python modules and API keys to allow itself to be a conduit for many tools ranging from theHarvester to Metasploit. It is an awesome standalone reconnaissance tool in its own right. As a side note, we all totally have a geeky nerd crush on LaNMaSteR53.

This part of the series will look at installation and adding API keys. Later we will show you how to create a workspace, import data into the database, and export data for use with other tools.

For our targets of reconnaissance, we will use HackerOne’s directory of companies. This is not our way of saying, “Go out and hack these companies,” but our way of doing safe recon and providing continuous screenshots that will be easy to follow. This is also our way of introducing you to HackerOne and the bug bounty community if you are not already familiar with it.

Getting Started

While most penetration testers will be running this out of Kali Linux, the prerequisites (git and pip) may need to be installed before you start. Fortunately, this is easy on most Linux flavors and requires just a few simple commands:

sudo apt-get update
sudo apt-get install git
sudo apt-get install python-pip python-dev build-essential
sudo pip install --upgrade pip
sudo pip install --upgrade virtualenv

Next clone Recon-ng from bitbucket (Figure 1). In this tutorial we clone to the Home directory but feel free to use whatever directory structure works for you.

git clone https://LaNMaSteR53@bitbucket.org/LaNMaSteR53/recon-ng.git

Figure 1: git install

Next, change directory into the newly created recon-ng and list the contents (Figure 2).

cd recon-ng

Figure 2: recon-ng contents

We will use the REQUIREMENTS file to finish installing the dependencies for recon-ng.

pip install -r REQUIREMENTS

At this point the installation is almost ready to use. We will go over a little bit of information now while you’re still paying attention, and then get recon-ng running and the API keys loaded.

The installation of recon-ng also created .recon-ng, a hidden directory inside your home directory. This directory is empty; it is where your keys.db and your workspaces will be created. After logging into recon-ng for the first time, a workspace directory and the keys.db file are created in the hidden .recon-ng directory (Figure 3).


Figure 3: .recon-ng directory

To run recon-ng, go to the folder where you ran the “git clone” command. This is where the magic happens.

cd recon-ng
./recon-ng

Don’t worry if you get the “API key not set” error (Figure 4). We have not added any API keys yet.


Figure 4: Initial Start

From our screen, we can see that there are 76 Recon modules, 8 Reporting modules, 2 Import modules, 2 Exploitation modules, and 2 Discovery modules.  We are also using the “default” workspace. (Figure 5)


Figure 5: Recon-ng start screen

Close recon-ng and let’s look at the modules and the underlying code (Figure 6).

cd modules
cd recon

Figure 6: Module Directory

If we go inside the module directory and open a module, we can see the Python script that does all the magic (Figure 7).


Figure 7: Module Content

Adding API Keys

As I said in the introduction, this is a database driven tool.  Now it’s time to add information into the database.

The API keys are used by the modules to gather information for the SQLite database.  Some of the API keys are free but some can be expensive.  I will keep this tutorial to the free API keys that are available.

After going back into the recon-ng directory and typing “./recon-ng”, you will be inside the recon-ng console. (Figure 8)

keys list

Figure 8: Keys List

The following command is an example of adding the shodan_api key (bottom of Figure 8; look close, it is there).

keys add shodan_api <paste key here>

API Keys Signup URLs

Signing up for the API keys is the least fun and most time-consuming part of the setup. Showing each signup would be lethally boring, so here is the list of URLs. All links open in a new window because we are thoughtful like that.

Google API – https://console.developers.google.com/apis/library
Bing API – https://msdn.microsoft.com/en-us/library/bing-ads-getting-started.aspx
Facebook API – https://developers.facebook.com/docs/apis-and-sdks
Instagram API – https://www.programmableweb.com/api/instagram
Linkedin API – https://developer.linkedin.com/docs/rest-api
Shodan API – https://developer.shodan.io/
Twitter API – https://apps.twitter.com/

Part 2: Workspaces and Importing Data

Geographic Information Theory

There are two main types of geographic information found in files. Geotagging is information placed in a file with the GPS coordinates of the location where it was created. EXIF (Exchangeable Image File Format) contains the geotagging information as well as device type and speed. EXIF contains more information but is normally limited by the capabilities of the device creating the file.

What are the common weaknesses? Data leakage from the geographic information can pinpoint the exact location where a file was created. This information can be used to find detailed maps using software such as Google Earth or to create detailed patterns of movement.
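If you want to see this metadata for yourself, a tool such as exiftool will dump the GPS tags from a photo. Note that exiftool is not covered in this tutorial, and photo.jpg is a placeholder name:

```
exiftool -GPSLatitude -GPSLongitude photo.jpg
```

Files with no geotag simply return nothing for these tags.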

What are you trying to do? We are going to connect to Twitter and do geolocation on the @FIFAWorldCup account. Why the @FIFAWorldCup account? We know where the World Cup is happening, so it is easy to see if the information is correct.

 Getting Started

Get creepy from here: http://ilektrojohn.github.io/creepy/

Ready to Go

For this tutorial it is installed in a Windows 7 virtual machine; the Kali apt-get repositories did not have the latest version when this was written. Besides, the OS is just a tool; we don’t need to get caught up in an ideological battle about how somebody has to use a certain tool to be a ‘real’ hacker. Being effective is more important than being a zealot.

Edit the configuration: Edit -> Plugins Configuration then select Twitter Plugin -> Run Configuration Wizard -> Next. Enter your Twitter ID and password to authorize creepy by clicking Authorize APP.

Creepy Twitter Authorization Screen

Authorize Creepy

Wouldn’t this also be a great time to follow us @SecureNM? I’m not trying to make you feel guilty but you are here reading our stuff. Copy the PIN that Twitter generates into the text box at the bottom of the window and click the finish button.


Creepy Twitter Plugin Configuration

Creepy should now be authorized but just to be sure select Twitter Plugin and then click the Test Plugin Configuration button. Yay, we are ready to get started. Click OK a few times to get back to the main screen.


Twitter Plugin Success

From the file menu select Creepy -> New Project -> Person Based Project. This will start the project wizard. Fill in the information as you see fit.


Project Configuration

Add the information and select the proper plugin then select Search. In this case we used @FIFAWorldCup.


Search Results

Click the ID or IDs that you want to creep on, see what I did there? Then select Add to Targets. I added all of the IDs that were found to ensure data for this tutorial.

Select Next -> Next -> Finish.

Analyze the project by selecting the project and clicking the Analyze button.

There are many analyze buttons like it but this one is mine

Sao Paulo, Rio de Janeiro, and the Maldives are all among the locations of tweets sent by the Twitter IDs that creepy analyzed. Select one of these locations on the map, and through the power of Google and GPS you can see the location and possibly a street view. In the immortal words of Keanu Reeves: Whoa!


Full Map of Tweets


Location of Maldives Tweet

I know what you’re thinking: wow, that was cool, but so what? So what, you say? This is how you would use it on a real-life security engagement. You get a black box test with nothing but a URL. You find the company’s Twitter account on the website. Feeding this information into creepy gives you locations that are potential targets for social engineering, physical infiltration, and WiFi attacks. See how just a little information can turn the tide in an assessment?

Get Foca

Download link: http://www.informatica64.com/foca.aspx. Enter an email in the textbox at the bottom of the page labeled “Cuenta de correo” (email account). A download link will be sent out; this link will expire.

Install from downloaded package.

Now What?

What are you trying to do?

  1. Create Project
    1. Click Project -> New Project.
    2. Enter the Project name, Domain name, and Alternative domains. Alternative domains such as ftp.website.com or secondarywebsite.com will be used by Foca during later analysis.
    3. Be aware that on a website with a large number of documents, disk space can be consumed quickly. Take this into account when selecting the “Folder where save documents” location.
    4. Click the Create button.

Figure 1: New Project Creation

  2. Save the .FOCA file. This file contains the project definitions and the information that will be found later, with the exception of documents, which will be stored in the location provided earlier.
  3. Network and Domains Information
    1. Select the Network icon. The buttons will have black text when active and grey text when deselected. By default all searches are selected.
    2. Click Start. This can take a significant amount of time because of the dictionary search option. All of the names or IPs added when the project was created as the domain website or alternative domains will be searched.
    3. Select the Domains icon. Information gathered about the domain will be in the center pane.


Figure 2: Domain Information

  4. When the Domains icon is selected, three main options can be selected to gather more information about the chosen targets and to be used later to gather metadata.
  5. Clicking the Technology Recognition button will identify the web server type in use (Apache, IIS, ColdFusion, etc.). See Figure 2 for an example.
  6. The Crawling button uses the Google and Bing search engines to list the known files and folders.


Figure 3: Search Engine Crawling

  7. The Files button crawls the target using Google, Bing, and Exalead to find documents of the selected type. These files form the basis of the metadata analysis below.
  8. Vulnerability Enumeration
    1. Currently there are no examples available of vulnerabilities found using Foca to populate this section.
  9. Document Enumeration and Metadata Analysis
    1. Select the Metadata icon. The files found with the Files button will be listed; to ensure a complete list, click the Select all button.
    2. Download files for analysis. Select a subset of files and select Download from the right-click menu. Alternatively, right-click any file and select Download All.
    3. Once all files are downloaded, right-click any file and select Extract All Metadata from the menu. When this completes, select Analyze Metadata from the right-click menu. This will populate information into the other sections of Foca.
    4. Examples of Metadata Usage
      1. The names found in the Users section create a list of accounts to be brute forced.
      2. The Folders and Printers sections provide names of internal systems that can be targeted if a foothold is found externally.
      3. The Software, Emails, and Operating Systems sections are the most useful when combined in a spear phishing attack. Knowing the specific operating system and software in use allows a very targeted exploit to be created. Combine this exploit with the specific information in a document from the user tied to the email address, and a very effective phishing campaign should be possible.