Tuesday, January 21, 2014

It's Time For Optimistic InfoSec


I'll wager that you rarely come across "Information Security" placed in close proximity to the term "Optimistic". In fact,  it often seems that these terms are almost magnetically charged to repel one another. While you might see articles about enthusiasm over security budget increases or the effectiveness of some security technology, we rarely witness public optimistic proclamations or "high-five" celebrations in the Information Security community of practice.


Some of The Reasons For This Include


1. Information Security News Is Invariably Bad
The big InfoSec news stories are (always) bad. The developments and stories from 2013 were certainly no exception.

2. IS Professionals Get Paid To Think Negatively 
A large part of the job in InfoSec is necessarily anchored to certain patterns of negative thought.  We have to make it a habit to consider worst-case scenarios, how to break things, and ways to subvert good intentions.  We are at our best when we are "constructively negative".  This isn't a bad thing; it is actually one of our great strengths if we can avoid certain pitfalls (more on this below).

3. Pessimism Often Feels "Safe"
In a time when there is so much focus on what isn't working right, it takes a good bit of professional courage to go against the grain.  If you're optimistic/positive about something and things work out well, that is one thing.  If, however, you've expressed optimism and something goes wrong, we tend to view this as a type of failed professional prognostication.

Reclaiming Optimism

Of course, InfoSec must remain "constructively negative" in terms of evaluating risks; however, we also have to make sure that the inertia of this habit, along with the barrage of negative news, doesn't bleed over into how we view the professional mission of Information Security.  When you tune in to some of the InfoSec echo-chambers, you often hear a great deal of frustration laced generously with sarcasm about just how bad things are or what someone did wrong.  It's understandable that everyone occasionally needs to vent; however, at a time when Information Security has become a central concern for individuals, businesses, and governments alike, we also need to project attitudinal leadership through constructive expressions of what we are doing right, what we are able to improve, and most importantly how we will continue to cultivate realistic balances of risks and opportunities in cyberspace.

                                    
Critically, this is not the same as saying that we need to put on rose-colored glasses just to make ourselves feel better. Things are tough; all of our favorite asymmetries are still in play: rate of complexity increase vs. accurate risk modeling, offensive vs. defensive investment thresholds, threat adaptation vs. defensive evolution.

The kind of optimism that we need, however, is one that acknowledges these challenges but doesn't hide behind them.  This type of attitude represents a forward-looking stance that purposefully seeks opportunities to recognize and support the good things we've done, actively encourages live-wire enthusiasm to seize new opportunities and innovations, and maintains the tenacity needed to stay in the fight to make things better.

Four Very Important Reasons For IS Optimism

If the news from 2013 left you feeling a bit down, here are four significant reasons for considerable optimism:

1. We Win Every Day
For every news story and issue that we encounter, we prevent, detect, and deter a truly vast number of attacks and proactively find and fix a large number of issues.  It is easy to begin to see this as "background noise"; however, if it weren't for good security efforts and practices, this "background noise" would be the din of catastrophe.  We are doing good work every day to protect the commonweal.

2. We Are In It Together
There is much more collaboration and information sharing occurring in and among varied InfoSec communities.  There is a plethora of reference and training material that folks have freely shared, and there are so many people willing to help one another through the free exchange of ideas and lessons learned.

3. We are Innovating Like Crazy 
The amount of security innovation is at an all-time high; if you look at the projects, free tools, and products on the market, it is amazing how many great ideas are out there.  This innovation is not confined to software either; we also have the capacity to innovate new defensive methods, assessment processes, and services that contextualize the way we do security above and beyond mere compliance.

4. We will not Surrender
The folks that I respect in this field all have stories of rough days/weeks/months, but they have never quit or walked away.  Even rough spots are opportunities to learn, adapt, and come back stronger. Security issues may not be going away anytime soon, but on a positive note, neither are those who are truly committed to making things better. Continual learning and persistence -- the greatest defensive weapons in our arsenal.

Ideas/Feedback?
What things are you optimistic about in Information Security? What are we doing really well? What opportunities do you see on the horizon in 2014?

Monday, December 23, 2013

Building A Cheap Personal VPN


Introduction
This year we've seen individual concerns regarding data privacy expand dramatically. While public interest in this topic has increased, day-to-day computing practices still haven't changed a great deal.  Many old habits persist that often put our personal information at risk. One prime example is the use of shared, untrusted wireless connections. Individuals often grow accustomed to indiscriminately connecting to available wireless networks with little foreknowledge of the identity, trustworthiness, or goals of the operators of these services. While it is no surprise that anyone would wish to take advantage of "free" connections, by placing our traffic on untrusted shared networks we open ourselves up to a number of privacy and security issues, including:
(Image: the DefCon Wall of Sheep)
  • Traffic Interception/Redirection - When joining an untrusted network, there is a real risk that malicious individuals may intercept your traffic or redirect your requests to mock-up sites meant to capture your credentials. Even if you join a wireless network secured with a static pre-shared key (e.g. at a conference), you should not mistake this for a significant security measure: other individuals with access to this key can relatively easily sniff and decrypt your traffic.
  • Traffic Analysis / Privacy - When you join an untrusted network, you may not be aware of the privacy practices relevant to this connection. What kind of logging is going on via this network? Even when your web traffic is encrypted, are your DNS queries being logged for analysis? What information are you giving away about yourself without your awareness? (An interesting story from earlier this year revealed that even just leaving your mobile WiFi turned on can be used to track your movements and shopping habits in stores.)
  • Traffic Filtering and Restrictions - Do you have unfettered access to information and sites from the location you are connecting from?  Are you restricted to particular kinds of Internet applications on this link? 
These types of concerns have spurred the growing popularity of commercial personal VPN services. For less than $20 per month, these providers offer you the ability to encrypt and tunnel all of your Internet traffic.  The merit of these services is that they raise the bar significantly for prying eyes and give you greater control over your online "point of presence" -- the location where your traffic is decrypted and routed to the Internet at large (see diagram below).  Whereas in the past VPN services were usually only employed by organizations to provide secure remote access to internal resources, it is now feasible for individuals to employ a personal VPN to enhance the security and privacy of their own network traffic.


Another factor driving the adoption of personal VPNs is the DIY community and low-cost methods for deploying services using cloud computing resources.  It is possible, and often less expensive, to set up your own low-cost VPN using OpenVPN and Amazon EC2. For those with the time, interest, and inclination to test out their own personal VPN, the steps below provide an outline of a basic build.

*Important Note: These instructions are intended for personal usage on untrusted networks only.  For business or organizational systems, you should consult with your IT group to determine what VPN services may be available and approved for authorized use.  Using a non-approved VPN within certain networks may be considered a violation of policy as well as an organizational security issue.

Technical Instructions

Step #1: Creating An Amazon EC2 Instance 
For this build, I will use Ubuntu Server 12.04 LTS running on an Amazon EC2 micro instance that is eligible for the free usage tier.

If you've never used EC2 before, you will definitely need to familiarize yourself with this platform.

Amazon has some good getting-started guides here (highly recommended):
http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/GetStartedLinux.html


The basic steps that we need to take to bring this Ubuntu instance up are the following:
  • Select Ubuntu Server 12.04 LTS x64
  • Use Micro Tier (t1.micro, 613MB) for test setup. Eligible for free usage tier.
  • Save and Backup Your Key Pair (PEM file). Don't lose this file! You will need it to access your EC2 instance.
  • Create A Customized Security Group that allows inbound access to SSH (TCP 22) and our custom OpenVPN port (UDP 443). (A command-line sketch of this step follows below.)
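
If you prefer working from a terminal, roughly the same security group can be created with the AWS CLI (assuming you have it installed and configured). This is only a sketch, and the group name "personal-vpn" is an arbitrary example:

# Create a security group for the VPN instance (the name is an example)
aws ec2 create-security-group --group-name personal-vpn --description "Personal VPN"

# Allow inbound SSH (TCP 22) and our custom OpenVPN port (UDP 443)
aws ec2 authorize-security-group-ingress --group-name personal-vpn --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name personal-vpn --protocol udp --port 443 --cidr 0.0.0.0/0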









After your instance has started, you will need to access it using SSH and the Key file you saved.
chmod 400 example.pem
ssh -i example.pem ubuntu@ec2-example.compute-1.amazonaws.com


Patches and Software Installs
Once the instance has booted, we need to  perform some software updates and installs.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install openvpn -y
sudo apt-get install dnsmasq -y
sudo apt-get install easy-rsa -y



Step #2: Setting Up A Certificate Authority + Generating Keys
OpenVPN supports two secure modes of operation: one employs a pre-shared static key (PSK), and the other is based on SSL/TLS using RSA certificates and keys.  The PSK method has the benefit of simplicity; however, it is not the most secure method (if anyone intercepts this key, then all traffic could potentially be decrypted). For this reason, we will use the SSL/TLS method.
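
(For comparison, the static-key mode really is just a single command to set up, which is why it is tempting; the file name below is only an example.)

# Generate a single shared secret -- simple, but one intercepted file exposes all traffic
openvpn --genkey --secret static.key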

Copying Configuration Files
First off, we will want to copy the OpenVPN example files to obtain the scripts we'll need to establish a local certificate authority.
sudo mkdir /etc/openvpn/easy-rsa/
sudo cp -R /usr/share/doc/openvpn/examples/easy-rsa/2.0/* /etc/openvpn/easy-rsa/ 
sudo ln -s /etc/openvpn/easy-rsa/openssl-1.0.0.cnf /etc/openvpn/easy-rsa/openssl.cnf

Setting Up Variables
Now we will want to set some initial variables that will allow the easy-rsa key management scripts to function.
sudo vi /etc/openvpn/easy-rsa/vars

Some of the variables that you will want to set and change to establish the CA include the following:
export KEY_SIZE=2048
export KEY_COUNTRY="US"
export KEY_PROVINCE="YourProvince"
export KEY_CITY="YourCity"
export KEY_ORG="YourORG"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_EMAIL=mail@host.domain
export KEY_CN=changeme
export KEY_NAME=changeme
export KEY_OU=changeme
export PKCS11_MODULE_PATH=changeme 
export KEY_CONFIG=/etc/openvpn/easy-rsa/openssl-1.0.0.cnf
Note that we are using a 2048 bit key for additional paranoia.

Generating the master CA and key (as root)
cd /etc/openvpn/easy-rsa/
source vars
./clean-all
./build-ca

Generating Diffie-Hellman parameters for the OpenVPN server (as root)
./build-dh

Generating Server Certificate
./build-key-server myservername

Copying certificates and keys to /etc/openvpn/
cd /etc/openvpn/easy-rsa/keys/
cp ca.crt myservername.key myservername.crt dh2048.pem /etc/openvpn/

Generating Client Key-Pairs
./build-key client1
./build-key client2


At the end of this step you should now have several files residing in /etc/openvpn. Here is a break-down of what these files are:
  • ca.crt - the certificate of our local certificate authority
  • myservername.crt - the server's certificate
  • myservername.key - the server's private key (keep this secret)
  • dh2048.pem - the Diffie-Hellman parameters used for key exchange


Step #3: Creating OpenVPN Server Config
Here is a somewhat standard server config.  This would be stored in /etc/openvpn/server.conf  

port 443
proto udp
dev tun
ca ca.crt
cert myservername.crt
key myservername.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 10.8.0.1"
keepalive 10 120
comp-lzo
persist-key
persist-tun
status openvpn-status.log
verb 3
Note the push directives.  These route all traffic through our VPN server and also change the DNS settings for the client upon connection (moving DNS handling to VPN server).

Step #4: Enabling NAT Forwarding
To route Internet traffic for connecting clients we'll need to set up a basic NAT firewall config. We'll do it manually first and then drop some rules in /etc/rc.local for quick/dirty persistence.

sudo sysctl -w net.ipv4.ip_forward=1

#OPENVPN Forwarding

sudo iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
sudo iptables -A FORWARD -j REJECT
sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
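
To sanity-check the setup, you can confirm that forwarding is enabled and that the rules are loaded (the packet counters will start incrementing once clients begin passing traffic):

# Should print 1 if forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward

# Review the filter and NAT rules along with their packet counters
sudo iptables -L FORWARD -n -v
sudo iptables -t nat -L POSTROUTING -n -v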


Step #5: DNSMasq Setup
We will set up DNSMasq to localize DNS request handling and also provide some acceleration (via caching).

/etc/dnsmasq.conf
listen-address=127.0.0.1,10.8.0.1
bind-interfaces
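
After saving the config, restart dnsmasq and make sure it answers queries locally. Running the same lookup twice should show the second answer coming back from cache (dig is part of the dnsutils package if it isn't already installed):

sudo service dnsmasq restart

# The second identical query should return noticeably faster (served from cache)
dig @127.0.0.1 example.com
dig @127.0.0.1 example.com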

Step #6: Setting RC.LOCAL Boot Options
Some quick and dirty lines in /etc/rc.local to bring NAT up and make sure that dnsmasq is running.
#OPENVPN Forwarding
sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
iptables -A FORWARD -j REJECT
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

#START DNSMASQ

/etc/init.d/dnsmasq start



Step #7: Client Setup 

Archiving + Downloading Client Key-Pairs
To set up our client, we will need the CA certificate, the client certificate, the client private key, an OpenVPN client configuration, and an OpenVPN client application.  First we can tarball the client information we need and then download it via sftp.
cd /etc/openvpn/easy-rsa/keys/ 
tar cvzf ~ubuntu/client1.tgz ca.crt client1.crt client1.key
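
One easy way to pull the archive down is scp from your local machine, using the same key pair you use for SSH (the hostname below is a placeholder for your instance's public DNS name):

# Run from your local machine
scp -i example.pem ubuntu@ec2-example.compute-1.amazonaws.com:client1.tgz .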

Basic Client Configuration
In addition to downloading this tar file, we will also need to set up a basic client config like the one below.
client
dev tun
proto udp
remote ec2-example.compute-1.amazonaws.com 443
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client1.crt
key client1.key
ns-cert-type server
comp-lzo
verb 3


Configuring Client Software
In general, to configure a client you will want to extract all the files from the tarball you downloaded and then copy them, along with the client configuration (see above), into one common folder.  The last step is to import or load the client configuration file. Note that some clients will look for a file with an .ovpn extension for import; this is simply a flat text configuration file (the same as above).
(Image: OpenVPN Connect on Android)

Keep in mind that if you are adding new clients, you will need to create new key-pairs for each of them (see Step #2).
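
On Linux and OS X you can also skip the GUI clients entirely and bring the tunnel up from a terminal. A minimal sketch, assuming the tarball was downloaded to ~/Downloads and the client config above has been saved into the same working directory as client1.ovpn:

mkdir ~/vpn && cd ~/vpn
tar xvzf ~/Downloads/client1.tgz      # extracts ca.crt, client1.crt, client1.key
sudo openvpn --config client1.ovpn    # leave this running; Ctrl-C to disconnect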

Some popular OpenVPN client software includes:

OpenVPN GUI for Windows

Tunnelblick OpenVPN GUI for OS X

OpenVPN Connect for Android

OpenVPN Connect for iOS

Troubleshooting: Most client software will give you a status indicator concerning whether your VPN tunnel is established.  However, you can also test this by pinging the remote tunnel interface on the OpenVPN server at 10.8.0.1.
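
Another quick check is to compare your apparent public IP address before and after connecting; while the tunnel is up it should match the EC2 instance's public address (icanhazip.com is just one of several services that echo your address back):

# The tunnel interface on the server should answer
ping -c 3 10.8.0.1

# Your apparent public address should now be the EC2 instance's
curl icanhazip.com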


Thursday, September 12, 2013

10 Useful Firefox Plugins For Pen-Testing

Weaponizing Your Web Browser  
An ordinary web browser is already, in many ways, an extremely versatile security tool.  However, with the addition of just a few select plugins, you can easily configure your browser to provide an application security assessment platform.

While there are a large number of Firefox plugins that have utility for security assessments, there is also a great deal of feature overlap between several of these projects.  For a more comprehensive list of Firefox pentest plugins, you can check out my plugin collection listed here:
DefendLink - Appsec Addons Collection

Here are 10 plugins that are extremely useful and provide unique functionality for application pen-testing (compatible with FF version 23.0 and above):

#1
HACKBAR
Developer: Johan Adriaans, Pedro Laguna
If you have some experience with web-application security testing, then Hackbar is definitely one of the most useful plugins.  It automates many of the repetitive tasks involved in manually testing sites for flaws like XSS and SQLi.
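
To give a sense of the busywork Hackbar saves, here are the sorts of encodings and hashes you would otherwise be generating by hand (or in a terminal) while crafting test payloads; a few illustrative one-liners:

# URL-encode a classic SQLi probe string
python -c "import urllib; print urllib.quote(\"' OR '1'='1\")"

# Base64-encode a value
echo -n "admin:admin" | base64

# Hash a candidate value
echo -n "password" | md5sum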
#2
TAMPERDATA
TamperData allows you to directly view and modify HTTP/HTTPS headers and POST parameters. It's amazing how many web apps still rely solely on client-side input validation and remain vulnerable to this kind of tampering.
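
As a rough illustration of why client-side validation alone fails (the URL and parameters here are entirely made up), the same POST a page would send can be replayed outside the browser with whatever values you like:

# Replay a form submission with values the page's JavaScript would have rejected
curl https://target.example/profile \
     -H "Cookie: session=..." \
     -d "display_name=<script>alert(1)</script>&age=-1"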
#3
FIREBUG
Firebug is an extremely versatile and well-documented tool.  While its emphasis is debugging, it also has utility for penetration testing due to the ability to quickly dissect the structure of a page as well as directly modify page elements.

#4
WAPPALYZER
Developer: Elbert Alias
Wappalyzer allows for the detection of web application components including CMS software, CDNs, operating systems, and web server revisions.
#5
XSS ME
Developer: Security Compass
XSS Me allows for scanning web forms for common cross-site scripting reflection attacks (non-persistent only).
#6
SQL INJECT ME
Developer: Security Compass
In a similar vein to XSS Me, Security Compass's other plugin allows for testing of common SQL injection flaws right from the browser.
#7
PASSIVERECON
PassiveRecon provides a number of quick-fire shortcuts for performing standard profiling of a website and its online content in a convenient manner.

It's launched from within the context menu of the browser. The "Show All" option does a quick info dump on the site.


#8
FOXYPROXY
FoxyProxy is a Firefox extension that automatically switches an Internet connection across one or more proxy servers based on URL patterns. (Handy for toggling between interception proxies like ZAP, Burp Suite, etc.)
#9
COOKIES MANAGER+
Cookies Manager+ provides an easy way to view, edit, and create new cookies.  It also shows extra information about cookies, allows editing multiple cookies at once, and supports backing them up and restoring them.

#10
USER AGENT SWITCHER
The User Agent Switcher extension adds a menu and a toolbar button to alter the user agent of the browser.  This plugin includes common user agents for mobile platforms and web spiders as well.

Conclusion

You might ask, with so many feature-rich web application scanners on the market, why even bother with browser extensions?  The simple answer is that true application security assessments should never rely solely on scan results, but should instead take the time and effort required to validate vulnerabilities as well as uncover the many issues that scanners often will not detect.  The plugins listed above provide functionality that can accelerate these manual review and validation efforts.

Ideas/Further Reading?



What plugins do you find most useful for pen-testing? Do you have any experience using chrome extensions for web application assessments?   I'd love to hear your thoughts.






Thursday, August 15, 2013

Modeling IR Program Maturity

If you ask IT managers about improving something, you're very likely to get some kind of response that is grounded in the notion of process maturity.  One of the most common ways of considering process maturity at a high level is the Capability Maturity Model Integration (CMMI) developed by Carnegie Mellon University.

CMMI models often contain five levels of process maturity ranging from ad-hoc processes (heroics) to processes that are highly optimized (continual improvement).

It is interesting to consider how Incident Response maturity levels might be expressed using a CMMI perspective and what type of differentiating processes might be found at each level of development.  In a recent talk, I offered my own take on IR maturity and capability levels (see diagram below).

This model takes into account two core capabilities that are critical to IR success:
  • Threat Awareness - Our ability to have accurate and reliable information concerning the presence of threat actors, their intentions, their historical activities, and how our defenses relate to all of the aforementioned.
  • Agility - Our ability to quickly and sufficiently isolate, eradicate, and return the business to normal operations.
By relating these two attributes to common and/or emerging IR program states, we can map out roughly five stages of maturity and capability:

Levels/Stages

Level 1 Reactive / Ad-hoc
This is the "nuke-from-orbit" approach that, unfortunately, too many organizations still employ when they discover a compromised asset.  By re-imaging or restoring the system from backups, it is possible to get back to business very quickly (high agility), but no real knowledge is gained of how the system was hacked, why it was hacked, or what it was used for once compromised (low threat awareness).

Level 2 Tool Driven / Signature Based
At this phase, organizations adopt automated tools that look for potential compromises in the environment.  These are often signature-driven tools (AV, IDS, etc.) that provide automated alerts of potential compromises.  Remediation of these compromised systems is also driven by tools, sometimes in an effort to "clean" a system of compromise (which is incidentally not a good idea).

Level 3 Process Driven
At this phase, organizations have adopted internal formal IR roles, processes, and governance structures.  For many organizations, this is the ideal state of operations where attacks are detected, analyzed, and addressed in a cost-effective and repeatable manner.  The only deficiency with this model is that dealing with targeted attacks requires more than just good processes. 
Important Papers/Documentation: NIST 800-61

Level 4 Intelligence Driven
For many large organizations, intelligence-driven IR is now the goal due to the prevalence of APT risks. This IR level requires having a more detailed and up-to-date understanding of threat actors, including their objectives, motivations, and their TTP profile (tools, tactics, procedures).  This knowledge of adversarial disposition is then used to architect security defenses and detective controls in a manner that allows for discrete actions to be taken to disrupt, degrade, and deny the ability of an adversary to reach their objectives.
Important Papers/Documentation: Intelligence Driven Computer Network Defense (Lockheed Martin)

Level 5 Predictive Defense
Predictive defense is still a very new area.  Terms like "active defense" are also used to describe this level of operations, but they cause a great deal of confusion.  At its heart, this approach involves the convergence of IR processes and an adaptive defensive architecture that can be used to "waylay" adversaries when they enter, operate, and move within protected environments.  I suspect one of the key characteristics of this model will ultimately be the ability to develop capabilities that allow for deception and denial operations.

For an idea of what this might look like, check out this presentation by MITRE researchers:

Active Cyber Network Defense with Denial 


Finding The Right Level
It is important for us to consider our IR program maturity and capabilities in relation to the threats that we are most likely to face and the scope of impact these threats can create.

If you are an SMB, then it probably doesn't make a great deal of financial sense to go beyond a Level 3 state of preparedness (having a maintained plan, concrete roles/responsibilities, lines of communication, and established response procedures).  Getting to this point is in fact a great deal of work for many organizations, and it allows for cost-effective management of the lion's share of security incidents related to "drive-by attacks".

However, if your organization maintains valuable intellectual property or has a highly recognized brand, then you've probably already realized that just having a formal IR plan and processes is not sufficient to deal with the risk of targeted intrusions.  For these risks, we have to begin to think more in terms of chess than checkers.  A great place to start thinking about some of these issues is the seminal paper Intelligence Driven Computer Network Defense.

Ideas/Further Reading?
What are your ideas about how IR process maturity and capabilities can be logically grouped?  Is a five stage model sufficient?  I'd love to hear your thoughts.

Active Defense Harbinger Distribution (ADHD) - ( Linux distro that SANS uses in their active defense classes.)

How Lockheed Martin's 'Kill Chain' Stopped SecurID Attack  (Short article on Kill Chain Framework)


APT Life-Cycle (diagram)