Wednesday, November 14, 2012

How Ditching Cable Led To Boxee TV

Since moving into our new house, we decided to disconnect cable and go strictly to Internet-only content.  We have been cable-free for four months now and it has been a pretty easy transition.  To replace cable, our initial setup was an HDTV indoor/outdoor antenna and an Apple TV with subscriptions to Netflix & Hulu.
Boxee TV

The biggest complaint I had about the antenna was not knowing what content was playing on the channels provided, as well as always having to rescan for new channels because of how the antenna is currently positioned (I plan on running an antenna to my roof, probably in the spring).  After some research, I came across the Boxee TV as a solution.  Even though this product is in its infancy, I am really pleased so far with having a clean interface for antenna-based television.  Another great feature of this product is the option of watching live TV on the go for no extra fee.  The Boxee TV provides a free web application that can be used on any device, which is nice for watching sporting events and local news on the go without downloading any extra applications.  The DVR functionality is seamless, but I will probably not pay the $10 per month after my free three-month subscription ends.

As for the applications, Boxee TV needs to add more of the content that the Roku provides by default, such as Amazon Video On Demand and HBO GO.  I understand that the Boxee TV was just released, so I will be patient about the supported applications.  For all streaming subscriptions we will be using the Apple TV, just because its applications are more polished.  Because of how "Apple-centric" my house is, I am able to use AirPlay Mirroring for all other online content, so I can be patient with Boxee TV in the meantime.

So if anyone wants to ditch cable, this $99 product is worth the investment.  The Boxee TV is a great set-top box that provides everything for people looking to cut the cord; even if you do keep cable, you can still use the Boxee TV as a live TV + no-limits DVR set-top box.

I applaud Boxee for providing consumers an alternative to cable television and taking on the cable companies.  Boxee's battle is going to be fierce, and I am happy to buy their products even if some basic applications are missing at initial release.  I feel very strongly about giving the consumer multiple options, and since I live in a city dominated by Xfinity/Comcast, I like having an alternative to traditional cable.  The Boxee TV is being sold at Walmart, so you can check it out prior to ordering online.

Tuesday, October 16, 2012

Brute Forcing Web Applications

Here is a quick tutorial on how to discover and brute force web applications with Nikto and DirBuster.  Please be ethical when using these tools and be certain to have the proper authorization from the owner of the website prior to running these penetration tools.

All these penetration techniques were performed on BackTrack Linux 5 R3, so make sure to grab the appropriate ISO image.

I created a quick script to perform the footprinting part of this test, so please feel free to use it.
#!/bin/bash

nikto_dir=/pentest/web/nikto/
targets=~/targets # update this file with all the IPs that are needed (e.g. 192.168.1.1), line by line

cd ${nikto_dir}

# Update Nikto with the latest plugins and databases if we have network connectivity.
# (The ping target is only a reachable-host check; adjust it for your environment.)
if ping -c 1 8.8.8.8 > /dev/null 2>&1; then
    echo ""
    echo "Updating Nikto"
    /usr/bin/perl nikto.pl -update
    echo ""
fi

echo ""
echo "Processing Nikto output for targets:"        
while read line
do
    # Quick nmap sweep of the target, saved in grepable format for reference
    /usr/local/bin/nmap ${line} -oG ~/nmap_${line}.gnmap
    # Nikto web scan against the common HTTP/HTTPS ports
    /usr/bin/perl nikto.pl -h ${line} -p 80,443 -F csv -o ~/nikto_${line}.csv
done < ${targets}
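
To run the script, save it somewhere convenient, make it executable, and execute it; the filename here is just illustrative:

root@bt:~# chmod +x ~/footprint.sh
root@bt:~# ~/footprint.sh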

After you collect the web scanner information, perform the following to load DirBuster.
root@bt:~# cd /pentest/web/dirbuster
root@bt:/pentest/web/dirbuster# java -jar DirBuster-0.12.jar -u http://localhost

Once DirBuster starts, you will be prompted with an interface that resembles the following:



To start your brute forcing test, enter the website you wish to test in the "Target URL" field (e.g. http://127.0.0.1:80).  To choose how aggressive the brute force will be, click the Browse button next to the field entitled "File with list of dirs/files"; for this example choose "directory-list-lowercase-2.3-medium.txt".  You can leave all the other settings alone; the defaults will be fine for this example.  Once you are set up, click Start to begin the brute force testing.

If you want to perform a DirBuster scan from terminal, perform the following:

root@bt:/pentest/web/dirbuster# java -jar DirBuster-0.12.jar -H -l /pentest/web/dirbuster/directory-list-2.3-medium.txt -s / -u http://127.0.0.1

DirBuster will output the results into /pentest/web/dirbuster when complete.
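
The exact report filename depends on the target and options, so an easy way to find the newest report is to sort the directory by modification time:

root@bt:/pentest/web/dirbuster# ls -lt /pentest/web/dirbuster | head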

Thursday, October 11, 2012

BackTrack Linux 5 R3 Firefox/Flash Installation

I ran into some issues with the default installation of Firefox 14 on BackTrack Linux 5 R3, so here are the steps to upgrade to the latest Mozilla Firefox and Adobe Flash client plugin. The reason for this post relates to running Nessus v4.4.1 in BackTrack Linux: Nessus needs the Flash client plugin installed to run in a web browser. Fortunately, an HTML5 client for Nessus is coming soon, so we won't have to deal with these issues in the future.

Kill all instances of Firefox by either closing your browser or running the pkill command.
# ps -elf | grep -i firefox
# pkill -f firefox

Make a new directory to download Firefox and the Flash client plugins in /tmp. Change directory to the newly created directory (e.g. /tmp/firefox).
# mkdir -p /tmp/firefox  
# cd /tmp/firefox

Remove the following files:
# rm -rf /opt/firefox/*  
# rm -rf /usr/lib/mozilla/plugins/*  
# rm -f /usr/share/icons/mozicon128.png

Download the following files:
Latest releases of Firefox & Flash as of 10/11/2012:  
# wget http://ftp.mozilla.org/pub/mozilla.org/firefox/releases/latest/linux-x86_64/en-US/firefox-16.0.tar.bz2  
# wget http://fpdownload.macromedia.com/get/flashplayer/pdc/11.2.202.238/install_flash_player_11_linux.x86_64.tar.gz

Extract the following files by running a TAR command:
Latest releases of Firefox & Flash as of 10/11/2012:  
 # tar -xvf firefox-16.0.tar.bz2  
 # tar -xvf install_flash_player_11_linux.x86_64.tar.gz

Copy over the newly untarred files to finish installation:
# cp -R firefox/* /opt/firefox  
# cp libflashplayer.so /usr/lib/mozilla/plugins/  
# mkdir -p ~/.mozilla/plugins  
# cp libflashplayer.so ~/.mozilla/plugins/

To start a new instance of Firefox, perform the following:
# /opt/firefox/firefox &

Once Firefox successfully starts, point your browser to about:plugins and confirm that the plugin is enabled.
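
To double-check that the binary in /opt/firefox is the new release, a quick version check should work:

# /opt/firefox/firefox --version
Mozilla Firefox 16.0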

Monday, October 8, 2012

NASA Insignias

During my commute this morning, I was listening to The Nerdist episode with Tom Hanks and learned a fact about the nicknames of the NASA insignias.  Maybe this trivia will help me win a free beer?!

The Meatball (1959–82 and 1992–present)


The Worm (1975–1992)

Wednesday, August 29, 2012

Linux Screen Information

I ran into some processing dilemmas at work today and came across the command called screen.  After reading Jeff Huckaby's blog post over at rackaid.com, "Linux Screen Can Save You from that Lost Connection," I decided to publish some of the information that was useful.  Some material is copied from Jeff Huckaby's post, so please feel free to head over to the original source linked above.

Environment for this post

I am mainly working with screen in either Cygwin or BackTrack 5 R2, so if you do not have screen available in your distribution, find a repository that houses the application and download and install it.

What is screen?

The screen man page states that "Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive  shells).   Each virtual terminal provides the functions of a DEC VT100 terminal and, in addition, several control functions from the ISO 6429  (ECMA  48,  ANSI X3.64)  and ISO 2022 standards (e.g. insert/delete line and support for multiple character sets)."

Basically, screen gives the user multiple windows for managing terminal sessions and keeps your processes running even if your connection is lost.

Using screen

To initialize screen, perform the following:
[ Wed Aug 29 10:06 @ ~ ]$ screen

Depending on how screen is set up, the user may be prompted with a text message, for example:


You are now inside of a window within screen. This functions just like a normal shell except for a few special characters. Screen uses the command  "<CTRL>-a"  as a signal to send commands to screen instead of the shell. To get help, just use  "<CTRL>-a"  then “?”. You should now have the screen help page.

Multiple windows

To open a new window, use the keyboard sequence "<CTRL>-a" then "c". This will create a new window for you with your default prompt.  You can create several windows and toggle through them with "<CTRL>-a" then "n" for the next window or "<CTRL>-a" then "p" for the previous window. Each process will keep running while you work elsewhere.

Leaving or Detaching

There are two ways to get out of screen. The first is just like logging out of a shell: you kill the window with "<CTRL>-a" then "k", or "exit" will work on some systems.
When using the "<CTRL>-a" then "k" command, you will be prompted with an option to "Really kill" the window.
The second way to leave screen is to detach from a window. This method leaves the process running and simply closes the window. If you have really long processes or need to close your SSH program, you can detach from the window using "<CTRL>-a" then "d". This will drop you into your shell. All screen windows are still there and you can re-attach to them later.

Attaching

To continue with the process that you were last running in screen, list the current detached sessions:
[ Wed Aug 29 10:06 @ ~ ]$ screen -ls

To create a new screen with the name screen_test, use:
[ Wed Aug 29 10:06 @ ~ ]$ screen -S screen_test

To reattach to a session, perform the following:
[ Wed Aug 29 10:06 @ ~ ]$ screen -r screen_test

Tuesday, May 1, 2012

Solaris OS Software Groups

Software groups are collections of Solaris OS software packages. Each software group includes support for different functions and hardware drivers. The Solaris OS is made up of six software groups:

  • Reduced Networking Support software group
  • Core System Support software group
  • End User Solaris software group
  • Developer Solaris software group
  • Entire Solaris software group
  • Entire Solaris software group plus Original Equipment Manufacturers (OEM) support 

Reduced Network Support Software Group (SUNWCrnet)

This group contains the minimum software that is required to boot and run a Solaris system with limited network service support. The Reduced Networking software group provides a multiuser text-based console and system administration utilities. This software group also enables the system to recognize network interfaces, but does not activate network services.

A system installed with the Reduced Networking software group could, for example, be used as a thin-client host in a network.

Core Software Group (SUNWCreq)

The Core software group contains the minimum software required to boot and run the Solaris OS in a minimum configuration, without the support to run many server applications. The Core software group includes a minimum of networking software, including Telnet, File Transfer Protocol (FTP), Network File System (NFS), Network Information Service (NIS) clients, and Domain Name Service (DNS). This software group also includes the drivers required to run the Common Desktop Environment (CDE) but does not include the CDE software. The Core software group also does not include online manual pages.

End User System Support Software Group (SUNWCuser)

The End User System Support software group contains the Core software group and also contains the recommended software for an end user plus the CDE.

Developer System Support Software Group (SUNWCprog)

The Developer System Support software group contains the End User System Support software group. It also contains the libraries, the include files, the online manual pages, and the programming tools for developing software.

Entire Distribution Software Group (SUNWCall)

The Entire Distribution software group contains the Developer System Support software group. It also contains additional software needed for servers. The software that is in the Entire Distribution software group is the entire Solaris OS software release minus OEM support.

Entire Distribution Plus OEM Support Software Group (SUNWCXall)

The Entire Distribution Plus OEM Support software group contains the entire Solaris OS software release. It also contains additional hardware support for OEMs and hardware not on the system at the time of installation. This software group is recommended when you are installing the Solaris OS software on non-Sun servers that use UltraSPARC processors.

To view the names of the cluster configurations, perform the command:

# grep METACLUSTER /var/sadm/system/admin/.clustertoc
METACLUSTER=SUNWCXall
METACLUSTER=SUNWCall
METACLUSTER=SUNWCprog
METACLUSTER=SUNWCuser
METACLUSTER=SUNWCreq
METACLUSTER=SUNWCrnet
METACLUSTER=SUNWCmreq

Note: The metacluster SUNWCmreq is a hidden metacluster. It allows you to create a minimal core metacluster by de-selecting packages from the core metacluster.

To determine which cluster configuration has been installed on the system, perform the command:

# cat /var/sadm/system/admin/CLUSTER
CLUSTER=SUNWCXall

Friday, April 6, 2012

Fundamentals of Solaris Package Administration

The /var/sadm/install/contents file is a complete record of all the software packages installed on the local system disk. It references every file and directory belonging to every software package and shows the configuration of each product installed. To list the contents of the /var/sadm/install/contents file, perform the command:

# more /var/sadm/install/contents
(output edited for brevity)
/bin=./usr/bin s none SUNWcsr
/dev d none 0755 root sys SUNWcsr SUNWcsd
/dev/allkmem=../devices/pseudo/mm@0:allkmem s none SUNWcsd
/dev/arp=../devices/pseudo/arp@0:arp s none SUNWcsd
/etc/ftpd/ftpusers e ftpusers 0644 root sys 198 16387 1094222536 SUNWftpr
/etc/passwd e passwd 0644 root sys 580 48298 1094222123 SUNWcsr

The pkgadd command updates the /var/sadm/install/contents file each time new packages are installed.

The pkgrm command uses the /var/sadm/install/contents file to determine where the files for a software package are located on the system. When a package is removed from the system, the pkgrm command updates the /var/sadm/install/contents file.
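
As a quick illustration of that cycle, installing a package from media and removing it again looks roughly like this (the package name and media path are only examples):

# pkgadd -d /cdrom/cdrom0/s0/Solaris_10/Product SUNWzip
# pkgrm SUNWzip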

To determine if a particular file was installed on the system disk and to find the directory in which it is located, use the pkgchk command with either the full or partial path name of the command you want to report on. For example, to verify that the showrev command is installed on the system disk, perform the command:

# pkgchk -l -P showrev
Pathname: /usr/bin/showrev
Type: regular file
Expected mode: 0755
Expected owner: root
Expected group: sys
Expected file size (bytes): 29980
Expected sum(1) of contents: 57864
Expected last modification: Dec 14 06:17:58 AM 2004
Referenced by the following packages:
        SUNWadmc      
Current status: installed

Pathname: /usr/share/man/man1m/showrev.1m
Type: regular file
Expected mode: 0644
Expected owner: root
Expected group: root
Expected file size (bytes): 3507
Expected sum(1) of contents: 35841
Expected last modification: Dec 10 10:42:54 PM 2004
Referenced by the following packages:
        SUNWman       
Current status: installed

Friday, March 2, 2012

Installing a bin/cue Converter for OSX

I ran into some issues when trying to open a bin/cue file under OSX the other day and came across BinChunker.  This easy command-line tool helped me convert bin/cue files to an ISO file.  Here are the steps:


  1. Download the latest BinChunker tar.gz file
  2. Open up your terminal program under /Applications/Utilities/Terminal.app
  3. Change directory to where you downloaded the BinChunker file.  For this example do the following:
    # cd ~/Downloads/
  4. To extract the file, perform the following:
    # gzcat bchunk-1.2.0.tar.gz | tar -xvf -
  5. To get the application running, perform the following:
    # cd ./bchunk-1.2.0
    # make
    # cp -p bchunk /usr/local/bin/
    # cp -p bchunk.1 /usr/share/man/man1/
  6. To convert the bin/cue to an ISO file, perform the following:
    # cd ~/Downloads/iso-test
    # bchunk test.bin test.cue test.iso
That is it!  Now just mount the ISO with Disk Utility and you are ready to run your ISO.  Good Luck.

Sunday, February 12, 2012

C|EH Notes: Top Security Challenges

Essential Terminologies:
  • Hack Value: It is the notion among hackers that something is worth doing or is interesting.
  • Target of Evaluation: An IT system, product, or component that is identified/subjected to a required security evaluation.
  • Attack: An assault on the system security derived from an intelligent threat.  An attack is any action violating security.
  • Threat: An action or event that might compromise security.  A threat is a potential violation of security.
Security Challenges:
  • Compliance with government laws and regulations.
  • Evolution of technology focused on ease of use.
  • Increased number of network-based applications.
  • Increasing complexity of computer infrastructure administration and management.
  • Difficulty of centralizing security in a distributed computing environment.
  • Direct impact of a security breach on the corporate asset base and goodwill.
Top Security Challenges:
  1. Increase in sophisticated cyber criminals.
  2. Data leakage, malicious insiders, and remote workers.
  3. Mobile security, adaptive authentication, and social media strategies.
  4. Cyber security workforce.
  5. Exploited vulnerabilities, operationalizing security.
  6. Critical infrastructure protection.
  7. Balancing sharing with privacy requirements.
  8. Identity access strategies and lifecycle.
List of Security Risks:
  1. Trojans/Info Stealing/Keyloggers
  2. Fast Flux Botnets
  3. Data Loss/Breaches
  4. Internal Threats
  5. Organized Cyber Crime
  6. Phishing/Social Engineering
  7. New emerging viruses
  8. Cyber Espionage
  9. Zero-Day Exploits
  10. Web 2.0 Threats
  11. Phishing attacks
  12. Identity black market
  13. Cyber-extortion
  14. Transportable data (USB, laptops, backup tapes)
  15. "Zombie" networks
  16. Exploits in new technology
  17. Outsourcing projects
  18. Social networking
  19. Business interruption
  20. Virtualization and cloud computing
Application Security Attacks:
  • Phishing
  • Session hijacking
  • Man-in-the-middle attack
  • The Web Parameter Tampering attack - based on the manipulation of parameters exchanged between client and server in order to modify application data, such as user credentials and permissions, or the price and quantity of products.
  • Directory traversal attacks - the goal of this attack is to make an application access a computer file that is not intended to be accessible. This attack exploits a lack of security (the software is acting exactly as it is supposed to) as opposed to exploiting a bug in the code.  Also known as the ../ (dot dot slash) attack.
    • Canonicalization (c14n) - a process for converting data that has more than one possible representation into a "standard", "normal", or canonical form.
Vulnerability Research Websites:

C|EH Notes: Regional Internet Registries (RIR)

Active and Passive Reconnaissance
Notes from the CERT Software Engineering Institute (SEI) lectures for the Certified Ethical Hacker (C|EH) certificate.

Regional Internet Registries:
  • African Network Information Center (AfriNIC)
  • Asia Pacific Network Information Center (APNIC)
  • American Registry for Internet Numbers (ARIN)
  • Latin America and Caribbean Network Information Centre (LACNIC)
  • Réseaux IP Européens Network Coordination Centre (RIPE NCC)
Top Level Domain Registries
InterNIC - Public Information Regarding Internet Domain Name Registration Services

DNS Enumeration:

# Get the Start of Authority (SOA) record and display all default nslookup parameters.
MBP:~ dafinga$ nslookup -all -type=SOA google.com

Set options:
  novc nodebug nod2
  search recurse
  timeout = 0 retry = 3 port = 53
  querytype = A       class = IN
  srchlist = 
Server: 10.0.0.1
Address: 10.0.0.1#53


Non-authoritative answer:
google.com
origin = ns1.google.com
mail addr = dns-admin.google.com
serial = 2012020700
refresh = 7200
retry = 1800
expire = 1209600
minimum = 300


Authoritative answers can be found from:
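
Other record types can be queried the same way; for example, the name server and mail exchanger records (output omitted):

MBP:~ dafinga$ nslookup -type=NS google.com
MBP:~ dafinga$ nslookup -type=MX google.com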

Friday, February 10, 2012

Modifying Network Parameters in Solaris 10

My reference: The Center for Internet Security (Solaris 10 Benchmarks v4.0).  To get the SMF service to run correctly, do the following:

mkdir -m 755 /var/svc/method
chown root:sys /var/svc/method
cd /var/svc/method

cat > cis_netconfig.sh << END
#!/sbin/sh
#IPv4 source route forwarding is disabled
ndd -set /dev/ip ip_forward_src_routed 0
#IPv6 source route forwarding is disabled
ndd -set /dev/ip ip6_forward_src_routed 0
#Reverse source routed packets are disabled
ndd -set /dev/tcp tcp_rev_src_routes 0
#Forwarding broadcasts are disabled
ndd -set /dev/ip ip_forward_directed_broadcasts 0
#The maximum size of the unestablished (half-open) TCP connection queue is set to 4096
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
#The maximum size of the established TCP connection queue is set to 1024
ndd -set /dev/tcp tcp_conn_req_max_q 1024
#Respond to ICMP timestamp request are disabled
ndd -set /dev/ip ip_respond_to_timestamp 0
#Respond to ICMP broadcast timestamp request is disabled
ndd -set /dev/ip ip_respond_to_timestamp_broadcast 0
#Respond to ICMP netmask request is disabled
ndd -set /dev/ip ip_respond_to_address_mask_broadcast 0
#Respond to ICMP echo broadcast is disabled
ndd -set /dev/ip ip_respond_to_echo_broadcast 0
#The ARP cache cleanup interval is set to 60000 milliseconds (1 minute)
ndd -set /dev/arp arp_cleanup_interval 60000
#The ARP IRE scan rate is set to 60000 (milliseconds "1 min")
ndd -set /dev/ip ip_ire_arp_interval 60000
#The IPv4 ICMP redirect is disabled
ndd -set /dev/ip ip_ignore_redirect 1
#The IPv6 ICMP redirect is disabled
ndd -set /dev/ip ip6_ignore_redirect 1
#Extended TCP reserved ports is set to port 6112
ndd -set /dev/tcp tcp_extra_priv_ports_add 6112
#IPv4 strict multihoming system drops any packets that appear to originate from a network attached to another interface
ndd -set /dev/ip ip_strict_dst_multihoming 1
#IPv6 strict multihoming system drops any packets that appear to originate from a network attached to another interface
ndd -set /dev/ip ip6_strict_dst_multihoming 1
#ICMPv4 redirects are disabled
ndd -set /dev/ip ip_send_redirects 0
#ICMPv6 redirects are enabled
ndd -set /dev/ip ip6_send_redirects 1
END

chown root:sys ./*
chmod 555 ./*

Now create the service manifest for /var/svc/method/cis_netconfig.sh

cat > cis_netconfig.xml << END
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM
"/usr/share/lib/xml/dtd/service_bundle.dtd.1">

<service_bundle type='manifest' name='CIS:cis_netconfig'>

<service
  name='site/cis_netconfig'
  type='service'
  version='1'>

  <create_default_instance enabled='true' />

  <single_instance />

  <dependency
    name='usr'
    type='service'
    grouping='require_all'
    restart_on='none'>
    <service_fmri value='svc:/system/filesystem/minimal' />
  </dependency>

<!-- Run ndd commands after network/physical is plumbed. -->
  <dependency
    name='network-physical'
    grouping='require_all'
    restart_on='none'
    type='service'>
    <service_fmri value='svc:/network/physical' />
  </dependency>

<!-- but run the commands before network/initial -->
  <dependent
    name='ndd_network-initial'
    grouping='optional_all'
    restart_on='none'>
    <service_fmri value='svc:/network/initial' />
  </dependent>

  <exec_method
    type='method'
    name='start'
    exec='/var/svc/method/cis_netconfig.sh'
    timeout_seconds='60' />

  <exec_method
    type='method'
    name='stop'
    exec=':true'
    timeout_seconds='60' />

  <property_group name='startd' type='framework'>
    <propval name='duration' type='astring' value='transient' />
  </property_group>

  <stability value='Unstable' />

  <template>
    <common_name>
      <loctext xml:lang='C'>
          CIS Network Parameter Set
      </loctext>
    </common_name>
  </template>
</service>

</service_bundle>
END

Now it is time to import the SMF service, by performing the following: svccfg import cis_netconfig.xml.

When the system is rebooted, the cis_netconfig.sh script will be executed and the appropriate network parameters will be updated. Store the file in /var/svc/manifest/site if it has to be re-imported into the system at a later date.

Note that we are creating a new script that will be executed at boot time to reconfigure various network parameters.

The file cis_netconfig.xml is an SMF manifest for the cis_netconfig service. Once imported into the SMF database, the cis_netconfig.sh will run on every system reboot to set the network parameters appropriately.
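
To spot-check that the import worked and the parameters actually took effect after the next reboot, something like the following should do; the FMRI matches the manifest above, and the ndd query should return 0 once the script has run:

# svcs -l svc:/site/cis_netconfig:default
# ndd -get /dev/ip ip_forward_src_routed
0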

If this hinders functionality, disable this service by performing the following: svcadm -v disable svc:/site/cis_netconfig:default

Wednesday, February 8, 2012

Solaris Basic Security Mode (BSM) Auditing

This whole post was copied from The Blog of Ben Rockwood; I just wanted to keep another copy in case the original gets removed.  Thanks Ben, your post was extremely helpful for my knowledge of Solaris auditing.

You trust your users, right? Me neither. Ever watched top/psrinfo or repeatedly used the w command to try to watch users on your system? We all have at some point. There are a lot of ways to see what users are doing, from checking their .bash_history to using DTrace (see 'shellsnoop'). But this is all kids' stuff. There are times you not only want to watch your users but really need to know what they're doing. If you want to know exactly what people are doing on your systems, I'm glad to say there is a solution: Solaris Auditing, otherwise known as the Solaris Basic Security Module (BSM). It's a valued part of Trusted Solaris and available to you in Solaris 10 out of the box (and Nevada/Solaris Express).

When auditing is enabled, the system will log as much detail about what's occurring on the system as you wish. You can see everything from logins and logouts to executions, process creation, and file access. If it happens, you can see it. When an action occurs that you want to capture, the action is recorded and placed into an "audit trace file". Later, these trace files can be used to provide all sorts of details about system activity. The possibilities are nearly endless.

You'll find the auditing configuration files in /etc/security/. In that directory are several config files for auditing, some scripts for enabling or disabling auditing, as well as some RBAC related files that we're not worried about here. In particular, you'll see the following files:
  • audit_class: Defines classes of audit events that are packed together for easy use.
  • audit_control: Primary audit configuration file
  • audit_data: Datafile maintained by auditd, don't edit this file. 
  • audit_event: Defines all the auditable events 
  • audit_record_attr: Defines the record format for various types of events 
  • audit_startup: Script for starting BSM/Auditing services 
  • audit_user: Define user specific audit flags (from audit_class) 
  • audit_warn: Auditd warning notification script 
  • bsmconv: Script for enabling BSM/Auditing 
  • bsmunconv: Script for removing BSM/Auditing
That list might look like a lot, but it's really not. If you want to set the level of auditing for all users, you'll edit audit_control; if you want to use different levels for each user individually, you'll edit audit_user. Other than that, the rest of the files are just reference or setup.

To get started on your auditing journey, you need to execute the /etc/security/bsmconv script as root. Once you run this you'll need to reboot, so shut down most things before you run the script and do the reboot when it completes. The docs suggest you do this in single-user mode, but unless the system is in production I wouldn't worry about it. Following the reboot, check the auditing service with svcs -a auditd; if it's not enabled, you can enable it with svcadm enable auditd. You'll see the "auditd" process running via ps -ef.
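
Condensed into commands, that sequence looks something like this (reboot however you normally would):

# /etc/security/bsmconv
# init 6
# svcs auditd
# svcadm enable auditd
# ps -ef | grep auditd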

When defining what events you want to record, you'll want to look at the classes defined in audit_class, and to better understand those classes you'll want to look at audit_event. Classes are represented by 2 letters, for instance "fr" represents "File Read". You can string together these classes to get just the sort of records you want.

In the /etc/security/audit_control configuration file there are two lines in particular that you're interested in: flags and naflags. The "flags" line specifies the default event classes to audit. These flags can be overridden by specifying flags for a user by name in audit_user. The "naflags" line specifies event classes to audit only when the event cannot be attributed to a specific user (naflags meaning "non-attributable flags"). You can modify the flags you specify with a + or -, where +flag means only record the event on success and -flag means only record the event on failure. A good example might be if you only want to audit file writes that fail or only logins that succeed.

Some interesting audit classes include (see the full list in audit_class):
  • fr: file read
  • fw: file write
  • nt: network
  • lo: login or logout
  • fc: file create
  • fd: file delete
  • xx: All X events
That's only a taste. There is also an "all" meta-class. The all class is fun to play with but creates a huge amount of data! On my home workstation, with just one user (me) and largely idle for 24 hours, it dumped 2.2GB of data into the audit trails. Playing with "all" is great to see what you can do, but I don't recommend using it beyond just fiddling. After editing the config files, make sure you restart the audit service with svcadm restart auditd or audit -s.

The following is an example audit_control (refer to the audit_control and audit_user man pages for more details):

dir:/var/audit
minfree:20
flags:lo,am,+fd,ss,-fa
naflags:lo,am


Naturally, storing all this data isn't easy. Events are recorded into an "audit trail"; by default these audit trails are stored in /var/audit, although for security purposes it's recommended that they be placed on a remote mount point (more on this later). Audit trails can get really, really large based on how much you're auditing, so be careful to watch them closely for a day or two after enabling it. In order to speed things up, trails are written in binary, which means that you need to "close" them before moving them around and you can't just cat or tail the trails. The currently active audit trail will have "not_terminated" in its file name. Before doing analysis you should "close" the trail, which can be done by stopping auditd or by using auditreduce -O and then removing the not_terminated trail.

In order to read the binary audit trails we can use praudit. This tool can output audit trails in raw format (-r), short form (-s), with one line per record (-l), or in XML form (-x). The short form is useful for digging through manually, while the XML form is handy if you want to use an XML processor to convert it into another, more presentable format. I recommend the XML form for those new to audit trails because all the output data is enclosed in tags that describe what you're looking at, which is handy for learning.

Another useful tool is auditreduce, which can merge and select audit records from audit trail files. This is handy when you want to consolidate one or more audit trails or move audit trails around. If you want to, say, move audit trails off to another system on a regular basis instead of storing the trails directly on NFS, this would be the tool for you; it's also useful if you want to take audit trails from multiple affected systems (such as during a break-in) and create a single audit trail to search through.
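
As a small sketch of that workflow, the following merges everything under /var/audit into a single closed trail and prints it in short form (the "merged" suffix and paths are arbitrary):

# cd /var/audit
# auditreduce -O merged /var/audit/*
# praudit -s *.merged | less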

In Solaris 10 a really kool feature was added to BSM: plugins! Using a plugin (shared lib) you can now output audit events directly to Syslog! This means that you don't need to leave audit trails on disk, NFS or local, but can send them, as they happen, off to a secure system. To enable it just add a line such as the following to your /etc/security/audit_control:
 
plugin: name=audit_syslog.so;p_flags=lo,ss,am

Note that the flags specified on this line don't define what gets audited. The flags are there to define what events should be passed via Syslog, effectively allowing you to filter certain flags to be syslog'ed even if you use other flags elsewhere. Please see Martin Englund's blog for more info, or read the audit_syslog man page.

Once you've enabled the syslog plugin for BSM, redirect the Syslog to a remote server with a line like this in /etc/syslog.conf:

audit.notice     @logdump.cuddletech.com

Auditing provides you with a lot of useful capabilities. Security, of course, is the key, but it goes further than that. Ever get tired of a user saying "I don't know what I did, but..."? If you used auditing you could look back through the user's execs and see what they had done. Curious if users are snooping through files? You could look through the audit trail at all file reads on a certain file.

You can even use auditing for troubleshooting! As an example, when I went to the MySQL Users Conference I took my dev workstation with me for demos. At home I use an LDAP server, so when I booted the system at the show it got really angry that LDAP wasn't present and I disabled the LDAP client service. When I got home I forgot about it and didn't re-enable LDAP; most of the user accounts are still in local files, so the system was unaffected and I didn't notice. Because I didn't remove ldap from nsswitch.conf, the system was still constantly trying LDAP lookups despite not having a client to perform the lookups on its behalf, and I never realized it until I was setting up auditing for this blog entry and kept seeing a lot of failed read requests to the Solaris Door of the name service... d0h! I'm not saying auditing is your first choice in troubleshooting tools, but it sure came in handy for me that time. The point simply is, auditing isn't just about security; it has a wide range of uses.

Please realize that by the nature of auditing you're writing data to disk (or NFS) every time your system performs an event that it needs to audit. It doesn't take a genius to realize that this is going to affect performance negatively. In my tests it's not generally a significant hit, but if you are going to consider using Solaris Auditing in production, limit your audited events as much as possible and carefully monitor server and application performance for a day or two after enabling it.

Auditing can seem difficult. I avoided it for years because it seemed so hard... but it really isn't. The config files have like a whopping 4 lines, so if you've never tried out auditing, give it a whirl and see if it's for you.

Thursday, February 2, 2012

Formatting and Command Line tips

These commands were performed on a Solaris 10 i386 & SPARC system.

A quick method to monitor CPU intensive processes:
# ps -ef | egrep -v "STIME|$LOGNAME" | sort +3 -r | head -n 15
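
An alternative that reports CPU usage directly is prstat, which ships with Solaris 10; sorting by CPU and limiting the output gives a similar top-15 view:

# prstat -s cpu -n 15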

Formatting USB Devices to create a Unix File System (UFS) on Solaris 10

Make sure that your USB drive is not plugged into the Solaris system.
# svcadm disable volfs

Plug in your USB drive, the data provided is just for example.
# rmformat -l

Looking for devices …
1. Logical Node: /dev/rdsk/c0t0d0p0
Physical Node: /pci@0,0/pci17aa,20ab@1d,7 /storage@2/disk@0,0
Connected Device: Seagate    10EAVS External 1.75
Device Type: Removable

Run fdisk on the "Logical Node" listed for your device.
# fdisk /dev/rdsk/c0t0d0p0

Delete any existing partitions and create a new partition with the SOLARIS2 option.  Make sure to choose option 5 to update disk configuration and exit.

Perform the following command to create your UFS file system.
# newfs /dev/rdsk/c0t0d0s2
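
Once newfs finishes, you can mount the new file system and confirm its capacity (the device name is taken from the rmformat example above):

# mount /dev/dsk/c0t0d0s2 /mnt
# df -h /mnt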

FAT32 Creation on Solaris 10

Make sure that your USB drive is not plugged into the Solaris system.
# svcadm disable volfs

Plug in your USB drive, the data provided is just for example.
# rmformat -l

Looking for devices …
2. Logical Node: /dev/rdsk/c0t0d0p0
Physical Node: /pci@0,0/pci17aa,20ab@1d,7 /storage@2/disk@0,0
Connected Device: Seagate    10EAVS External 1.75
Device Type: Removable

Run fdisk on the "Logical Node" listed for your device.
# fdisk /dev/rdsk/c0t0d0p0

Delete any existing partitions and create a new partition with the FAT32 option.  Make sure to choose option 5 to update disk configuration and exit.

Perform the following command to create your FAT32 file system.
# mkfs -F pcfs -o b=SEAGATE,fat=32 /dev/rdsk/c0t0d0p0:c
(the "b" option sets a label name ... useful if you want to label USB sticks)

To mount the newly created USB drive, perform the following:
# rmformat -l
# mount -F pcfs /dev/dsk/<USB DRIVE>:c /mnt
*Note* <USB DRIVE> could look like c0t0d0p0 depending on USB connection.

How to perform a flash archive split in Solaris


Make sure you have enough space on your hard drive prior to performing this action.  When you perform the FLAR split command, change directory into a location that has at least 8-12GBs available.
# cd /export/FLAR
# cp /export/install/FLARFILES/<image>.flar .
# flar split <image>.flar
# mkdir -m 755 ./image
# cd image
# cat ../archive | uncompress | cpio -id
... 20 minutes later

The FLAR image is now uncompressed and can be viewed in the image directory.  The image directory will have a complete snapshot of the Solaris restore hierarchy.

Wednesday, February 1, 2012

How to Identify Solaris's Instruction Set Attribute

It is easy!

# isainfo -v
64-bit sparcv9 applications
        vis
32-bit sparc applications
        vis v8plus div32 mul32
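
A couple of related flags are also handy: -b prints the number of bits in the native instruction set, and -k prints the ISA the kernel is running (output shown for the same SPARC box):

# isainfo -b
64
# isainfo -k
sparcv9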

Tuesday, January 24, 2012

The MITRE Corporation: "Information Security Data Standards"

"The security and integrity of information systems is a critical issue within most types of organizations. Finding better ways to address the topic is the objective of many in industry, academia, and government. One of the more effective approaches gaining popularity in addressing these issues is the use of standard knowledge representations, enumerations, exchange formats and languages, as well as sharing of standard approaches to key compliance and conformance mandates. By standardizing and segregating the interactions amongst their operational, development and sustainment tools and processes organizations gain great freedom in selecting technologies, solutions and vendors. These "Making Security Measurable" initiatives provide the foundation for answering today’s increased demands for accountability, efficiency and interoperability without artificially constraining an organization’s solution options."

http://measurablesecurity.mitre.org/list/index.html

Installing a Network Card on Solaris x86

Prior to installing the physical network interface card (NIC), run the following commands:

# touch /reconfigure
# poweroff (init 5)

Make sure that the NIC is installed properly on PCI slot "X" of the PC’s motherboard.  Power on the machine and confirm that the BIOS is detecting the NIC card prior to driver installation.

To properly configure the new NIC, find out information about the installed NIC by performing the following command:

# /usr/bin/X11/scanpci

The command will output information about what devices are installed on your PCI slots.  To install the network card correctly, look for the vendor and device id.  The output should look like the following:

pci bus 0x0002 cardnum 0x00 function 0x00: vendor 0x14e4 device 0x165a 
   Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express

Add your driver alias by performing the following:
# add_drv -a -i '"14e4,165a"' bge

OR

# vi /etc/driver_aliases
bge "14e4,165a"
:wq!

# sys-unconfig
The machine will halt and then you will have to reconfigure your network card as well as other system related information.

Enable the network card
# ifconfig <DEVICE> plumb
# ifconfig <DEVICE> 192.168.1.xxx netmask 255.255.255.xxx up
# ifconfig -a

But wait: if you just do that, the next time you reboot your entire configuration will be gone. To prevent this, perform the following:

# vi /etc/hostname.<DEVICE>

Enter the following:
IP-address netmask <netmask>
22.7.138.12 netmask 255.255.255.224

Now edit /etc/inet/hosts & /etc/inet/ipnodes
IP-address hostname
22.7.138.12     jumpstart3

After that is complete, run reboot -- -r (SPARC) or reboot (x86).
The next step is to add some routing to this configuration:
# route -p add default your-gateway
# route -p add default 192.168.2.1
-p = persists the route across reboots
default = all destinations (0.0.0.0/0.0.0.0)

This will register a default gateway for the PC. If you want to add a route from this PC to a specific network, you can do:
# route -p add -net network-address gateway-address

I suggest you always use -p so that your routing won't be flushed when the machine is rebooted. Also, you need your gateway in /etc/defaultrouter.
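
Using the example gateway from above, persisting it is a single line:

# echo "192.168.2.1" > /etc/defaultrouter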

To check your routing you can type:
# netstat -rn

To flush your routing table (the routes added from the installation configuration won't be flushed):
# route flush

These configurations will at least make your Solaris box able to communicate with other machines.